[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-08-05 Thread LI Guobao (JIRA)


 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of the “paramserv” built-in function is to update an initial or 
existing model with a given configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model=paramsList, features=X, labels=Y, val_features=X_val, 
val_labels=Y_val, upd="fun1", agg="fun2", mode="LOCAL", utype="BSP", 
freq="BATCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparams=params, checkpointing="NONE"){code}
We are interested in providing the model (a struct-like data structure 
consisting of the weights, the biases, and the hyperparameters), the training 
features and labels, the validation features and labels, the batch update 
function (i.e., the gradient calculation function), the update strategy (e.g., 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g., per 
epoch or per mini-batch), the gradient aggregation function, the number of 
epochs, the batch size, the degree of parallelism, the data-partitioning 
scheme, a list of additional hyperparameters, as well as the checkpointing 
strategy. The function returns the trained model in the same struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features  [optional]: validation features matrix
 * val_labels  [optional]: validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: LOCAL, REMOTE_SPARK): the execution backend where 
the parameter server is executed
 * utype  (options: BSP, ASP, SSP): the update strategy
 * freq  [optional] (default: BATCH) (options: EPOCH, BATCH): the 
frequency of updates
 * epochs : the number of epochs
 * batchsize  [optional] (default: 64): the batch size; this argument 
is ignored if the update frequency is "EPOCH"
 * k  [optional] (default: the number of vcores, or vcores / 2 when 
using OpenBLAS): the degree of parallelism
 * scheme  [optional] (default: disjoint_contiguous) (options: 
disjoint_contiguous, disjoint_round_robin, disjoint_random, overlap_reshuffle): 
the data-partitioning scheme, i.e., how the data is distributed across the workers
 * hyperparams  [optional]: a list of additional hyperparameters, 
e.g., learning rate, momentum
 * checkpointing  [optional] (default: NONE) (options: NONE, EPOCH, 
EPOCH10): the checkpoint strategy; a checkpoint can be created after each 
epoch or after every 10 epochs 
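To make the four partitioning options concrete, the following Python sketch mimics how rows could be assigned to the k workers under each scheme. The `partition` helper is purely illustrative and not part of SystemML; only the scheme names are taken from the proposal above.

```python
import numpy as np

def partition(X, k, scheme="disjoint_contiguous", seed=42):
    """Illustrative sketch: distribute the rows of X across k workers."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    if scheme == "disjoint_contiguous":
        # Worker i receives one contiguous slab of rows.
        return np.array_split(X, k, axis=0)
    if scheme == "disjoint_round_robin":
        # Row j goes to worker j mod k.
        return [X[i::k] for i in range(k)]
    if scheme == "disjoint_random":
        # Shuffle the rows once, then split into contiguous slabs.
        perm = rng.permutation(n)
        return np.array_split(X[perm], k, axis=0)
    if scheme == "overlap_reshuffle":
        # Every worker sees all rows, each in its own random order.
        return [X[rng.permutation(n)] for _ in range(k)]
    raise ValueError(f"unknown scheme: {scheme}")
```

The three disjoint schemes together cover each row exactly once, while overlap_reshuffle replicates the full dataset per worker.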

*Output*:
 * model' : a list consisting of the updated weight and bias matrices
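To illustrate how the upd and agg functions would interact under utype="BSP" with freq="BATCH", here is a minimal single-process Python sketch. The names paramserv_bsp, upd, and agg are hypothetical stand-ins (a least-squares toy model), not SystemML's implementation:

```python
import numpy as np

def paramserv_bsp(model, X, y, upd, agg, epochs=2, batchsize=2, k=2):
    """Sketch of a BSP parameter server: each 'worker' computes a gradient
    on its current batch, and the server waits for all k gradients before
    applying the aggregated update (the synchronization barrier)."""
    parts_X = np.array_split(X, k)   # disjoint_contiguous partitioning
    parts_y = np.array_split(y, k)
    for _ in range(epochs):
        for start in range(0, parts_X[0].shape[0], batchsize):
            # Barrier: collect one gradient per worker (freq="BATCH").
            grads = [upd(model, Xi[start:start + batchsize],
                         yi[start:start + batchsize])
                     for Xi, yi in zip(parts_X, parts_y)]
            model = agg(model, grads)   # synchronous model update
    return model

# Toy contract: upd computes a local gradient, agg averages and applies it.
def upd(w, Xb, yb):
    return 2 * Xb.T @ (Xb @ w - yb) / max(len(yb), 1)

def agg(w, grads, lr=0.1):
    return w - lr * np.mean(grads, axis=0)
```

Under ASP the server would apply each worker's gradient as it arrives instead of waiting at the barrier, and SSP would bound how far ahead the fastest worker may run.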

function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features : validation features matrix
 * val_labels : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: LOCAL, REMOTE_SPARK): the execution backend where 
the parameter is executed
 * utype  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparams  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 

[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-14 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, 
val_labels=Y_val, upd="fun1", agg="fun2", mode="LOCAL", utype="BSP", 
freq="BATCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparams=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features : validation features matrix
 * val_labels : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: LOCAL, REMOTE_SPARK): the execution backend where 
the parameter is executed
 * utype  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparams  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices

  was:
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, 
val_labels=Y_val, upd="fun1", agg="fun2", mode="BSP", freq="BATCH", epochs=100, 
batchsize=64, k=7, scheme="disjoint_contiguous", hyperparams=params, 
checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features : validation features matrix
 * val_labels : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparams  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: 

[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-14 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, 
val_labels=Y_val, upd="fun1", agg="fun2", mode="BSP", freq="BATCH", epochs=100, 
batchsize=64, k=7, scheme="disjoint_contiguous", hyperparams=params, 
checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features : validation features matrix
 * val_labels : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparams  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices

  was:
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, 
val_labels=Y_val, upd="fun1", agg="fun2", mode="BSP", freq="BATCH", epochs=100, 
batchsize=64, k=7, scheme="disjoint_contiguous", hyperparam=params, 
checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features : validation features matrix
 * val_labels : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>

[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-13 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, 
val_labels=Y_val, upd="fun1", agg="fun2", mode="BSP", freq="BATCH", epochs=100, 
batchsize=64, k=7, scheme="disjoint_contiguous", hyperparam=params, 
checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * features : training features matrix
 * labels : training label matrix
 * val_features : validation features matrix
 * val_labels : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices

  was:
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, Y, X_val, Y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="BATCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * Y : training label matrix
 * X_val : validation features matrix
 * Y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective 

[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-13 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, Y, X_val, Y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="BATCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * Y : training label matrix
 * X_val : validation features matrix
 * Y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize  [optional]: the size of batch, if the update frequence 
is "EPOCH", this argument will be ignored
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices

  was:
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, y, X_val, y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="EPOCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * y : training label matrix
 * X_val : validation features matrix
 * y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize : the size of batch
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be: 
> 

[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-13 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, y, X_val, y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="EPOCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * y : training label matrix
 * X_val : validation features matrix
 * y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize : the size of batch
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpointing  (options: NONE(default), EPOCH, EPOCH10) [optional]: 
the checkpoint strategy, we could set a checkpoint for each epoch or each 10 
epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices

  was:
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, y, X_val, y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="EPOCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * y : training label matrix
 * X_val : validation features matrix
 * y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize : the size of batch
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpoint  (options: NONE(default), EPOCH, EPOCH10) [optional]: the 
checkpoint strategy, we could set a checkpoint for each epoch or each 10 epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be: 
> {code:java}
> model'=paramserv(model, X, y, X_val, y_val, upd="fun1", agg="fun2", 
> 

[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-13 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, y, X_val, y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="EPOCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpointing="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * y : training label matrix
 * X_val : validation features matrix
 * y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize : the size of batch
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpoint  (options: NONE(default), EPOCH, EPOCH10) [optional]: the 
checkpoint strategy, we could set a checkpoint for each epoch or each 10 epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices

  was:
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 
{code:java}
model'=paramserv(model, X, y, X_val, y_val, upd="fun1", agg="fun2", mode="BSP", 
freq="EPOCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", 
hyperparam=params, checkpoint="NONE"){code}
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism, the data partition scheme, a list of 
additional hyper parameters, as well as the checkpointing strategy. And the 
function will return a trained model in struct format.

*Inputs*:
 * model : a list consisting of the weight and bias matrices
 * X : training features matrix
 * y : training label matrix
 * X_val : validation features matrix
 * y_val : validation label matrix
 * upd : the name of gradient calculation function
 * agg : the name of gradient aggregation function
 * mode  (options: BSP, ASP, SSP): the updating mode
 * freq  (options: EPOCH, BATCH): the frequence of updates
 * epochs : the number of epoch
 * batchsize : the size of batch
 * k : the degree of parallelism
 * scheme  (options: disjoint_contiguous, disjoint_round_robin, 
disjoint_random, overlap_reshuffle): the scheme of data partition, i.e., how 
the data is distributed across workers
 * hyperparam  [optional]: a list consisting of the additional hyper 
parameters, e.g., learning rate, momentum
 * checkpoint  (options: NONE(default), EPOCH, EPOCH10) [optional]: the 
checkpoint strategy; a checkpoint can be created after each epoch or after 
every 10 epochs 

*Output*:
 * model' : a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-13 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: 
The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be: 

 
{code:java}
model'=paramserv(model, X, y, X_val, y_val, upd=fun1, agg=fun2, mode=BSP, 
freq=EPOCH, epochs=100, batchsize=64, k=7, hyperparam=params, 
checkpoint=NONE){code}
 

We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism as well as the checkpointing strategy (e.g. 
rollback recovery). And the function will return a trained model in struct 
format.

  was:The objective of “paramserv” built-in function is to update an initial or 
existing model with configuration. An initial function signature would be 
_model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
interested in providing the model (which will be a struct-like data structure 
consisting of the weights, the biases and the hyperparameters), the training 
features and labels, the validation features and labels, the batch update 
function (i.e., gradient calculation func), the update strategy (e.g. sync, 
async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epoch, the batch 
size, the degree of parallelism as well as the checkpointing strategy (e.g. 
rollback recovery). And the function will return a trained model in struct 
format.


> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be: 
>  
> {code:java}
> model'=paramserv(model, X, y, X_val, y_val, upd=fun1, agg=fun2, mode=BSP, 
> freq=EPOCH, epochs=100, batchsize=64, k=7, hyperparam=params, 
> checkpoint=NONE){code}
>  
> We are interested in providing the model (which will be a struct-like data 
> structure consisting of the weights, the biases and the hyperparameters), the 
> training features and labels, the validation features and labels, the batch 
> update function (i.e., gradient calculation func), the update strategy (e.g. 
> sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch 
> or mini-batch), the gradient aggregation function, the number of epoch, the 
> batch size, the degree of parallelism as well as the checkpointing strategy 
> (e.g. rollback recovery). And the function will return a trained model in 
> struct format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-11 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature 
would be _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, 
freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. 
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function (i.e., gradient calculation func), the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
mini-batch), the gradient aggregation function, the number of epochs, the batch 
size, the degree of parallelism as well as the checkpointing strategy (e.g. 
rollback recovery). The function returns a trained model in struct 
format.  (was: The objective of “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature 
would be _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, 
freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. 
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function, the update strategy (e.g. sync, async, hogwild!, 
stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
gradient aggregation function, the number of epoch, the batch size, the degree 
of parallelism as well as the checkpointing strategy (e.g. rollback recovery). 
And the function will return a trained model in struct format.)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
> agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
> interested in providing the model (which will be a struct-like data structure 
> consisting of the weights, the biases and the hyperparameters), the training 
> features and labels, the validation features and labels, the batch update 
> function (i.e., gradient calculation func), the update strategy (e.g. sync, 
> async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
> mini-batch), the gradient aggregation function, the number of epoch, the 
> batch size, the degree of parallelism as well as the checkpointing strategy 
> (e.g. rollback recovery). And the function will return a trained model in 
> struct format.





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-11 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature 
would be _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, 
freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. 
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function, the update strategy (e.g. sync, async, hogwild!, 
stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
gradient aggregation function, the number of epochs, the batch size, the degree 
of parallelism as well as the checkpointing strategy (e.g. rollback recovery). 
The function returns a trained model in struct format.  (was: The 
objective of “paramserv” built-in function is to update an initial or existing 
model with configuration. An initial function signature would be 
_model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
interested in providing the model (which will be a struct-like data structure 
consisting the weights, the biases and the hyperparameters), the training 
features and labels, the validation features and labels, the batch update 
function, the update strategy (e.g. sync, async, hogwild!, stale-synchronous), 
the update frequency (e.g. epoch or mini-batch), the gradient aggregation 
function, the number of epoch, the batch size, the degree of parallelism as 
well as the checkpointing strategy (e.g. rollback recovery). And the function 
will return a trained model in struct format.)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
> agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
> interested in providing the model (which will be a struct-like data structure 
> consisting of the weights, the biases and the hyperparameters), the training 
> features and labels, the validation features and labels, the batch update 
> function, the update strategy (e.g. sync, async, hogwild!, 
> stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
> gradient aggregation function, the number of epoch, the batch size, the 
> degree of parallelism as well as the checkpointing strategy (e.g. rollback 
> recovery). And the function will return a trained model in struct format.





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-09 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Due Date: 16/May/18  (was: 17/May/18)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
> agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
> interested in providing the model (which will be a struct-like data structure 
> consisting the weights, the biases and the hyperparameters), the training 
> features and labels, the validation features and labels, the batch update 
> function, the update strategy (e.g. sync, async, hogwild!, 
> stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
> gradient aggregation function, the number of epoch, the batch size, the 
> degree of parallelism as well as the checkpointing strategy (e.g. rollback 
> recovery). And the function will return a trained model in struct format.





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-09 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Due Date: 17/May/18  (was: 21/May/18)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
> agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
> interested in providing the model (which will be a struct-like data structure 
> consisting the weights, the biases and the hyperparameters), the training 
> features and labels, the validation features and labels, the batch update 
> function, the update strategy (e.g. sync, async, hogwild!, 
> stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
> gradient aggregation function, the number of epoch, the batch size, the 
> degree of parallelism as well as the checkpointing strategy (e.g. rollback 
> recovery). And the function will return a trained model in struct format.





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-09 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature 
would be _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, 
freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. 
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function, the update strategy (e.g. sync, async, hogwild!, 
stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
gradient aggregation function, the number of epochs, the batch size, the degree 
of parallelism as well as the checkpointing strategy (e.g. rollback recovery). 
The function returns a trained model in struct format.  (was: The 
objective of “paramserv” built-in function is to update an initial or existing 
model with configuration. An initial function signature would be 
_model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
interested in providing the model (which will be a struct-like data structure 
consisting the weights, the biases and the hyperparameters), the training 
features and labels, the validation features and labels, the batch update 
function, the update strategy (e.g. sync, async, hogwild!, stale-synchronous), 
the update frequency (e.g. epoch or mini-batch), the gradient aggregation 
function, the number of epoch, the batch size, the degree of parallelism as 
well as the checkpointing strategy (e.g. rollback recovery). And the function 
will return a trained model in format of struct.)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
> agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
> interested in providing the model (which will be a struct-like data structure 
> consisting the weights, the biases and the hyperparameters), the training 
> features and labels, the validation features and labels, the batch update 
> function, the update strategy (e.g. sync, async, hogwild!, 
> stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
> gradient aggregation function, the number of epoch, the batch size, the 
> degree of parallelism as well as the checkpointing strategy (e.g. rollback 
> recovery). And the function will return a trained model in struct format.





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-09 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature 
would be _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, 
freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. 
We are interested in providing the model (which will be a struct-like data 
structure consisting of the weights, the biases and the hyperparameters), the 
training features and labels, the validation features and labels, the batch 
update function, the update strategy (e.g. sync, async, hogwild!, 
stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
gradient aggregation function, the number of epochs, the batch size, the degree 
of parallelism as well as the checkpointing strategy (e.g. rollback recovery). 
The function returns a trained model in struct format.  (was: The 
objective of “paramserv” built-in function is to update an initial or existing 
model with configuration. An initial function signature would be 
_model'=paramserv(model, X, y, X_val, y_val, g_cal_fun, upd=fun1, mode=SYNC, 
freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. 
We are interested in providing the model, the training features and labels, the 
validation features and labels, the gradient calculation function, the batch 
update function, the update strategy (e.g. sync, async, hogwild!, 
stale-synchronous), the update frequency (e.g. epoch or batch), the aggregation 
function, the number of epoch, the batch size, the degree of parallelism as 
well as the checkpointing strategy (e.g. rollback recovery).)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, upd=fun1, mode=SYNC, freq=EPOCH, 
> agg=fun2, epochs=100, batchsize=64, k=7, checkpointing=rollback)_. We are 
> interested in providing the model (which will be a struct-like data structure 
> consisting the weights, the biases and the hyperparameters), the training 
> features and labels, the validation features and labels, the batch update 
> function, the update strategy (e.g. sync, async, hogwild!, 
> stale-synchronous), the update frequency (e.g. epoch or mini-batch), the 
> gradient aggregation function, the number of epoch, the batch size, the 
> degree of parallelism as well as the checkpointing strategy (e.g. rollback 
> recovery). And the function will return a trained model in format of struct.





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-05 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature 
would be _model'=paramserv(model, X, y, X_val, y_val, g_cal_fun, upd=fun1, 
mode=SYNC, freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, 
checkpointing=rollback)_. We are interested in providing the model, the 
training features and labels, the validation features and labels, the gradient 
calculation function, the batch update function, the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
batch), the aggregation function, the number of epochs, the batch size, the 
degree of parallelism as well as the checkpointing strategy (e.g. rollback 
recovery).  (was: The objective of “paramserv” built-in function is to update 
an initial or existing model with configuration. An initial function signature 
is illustrated in Figure 1. We are interested in providing the model, the 
training features and labels, the validation features and labels, the gradient 
calculation function, the batch update function, the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
batch), the aggregation function, the number of epoch, the batch size, the 
degree of parallelism as well as the checkpointing strategy (e.g. rollback 
recovery).)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature would be 
> _model'=paramserv(model, X, y, X_val, y_val, g_cal_fun, upd=fun1, mode=SYNC, 
> freq=EPOCH, agg=fun2, epochs=100, batchsize=64, k=7, 
> checkpointing=rollback)_. We are interested in providing the model, the 
> training features and labels, the validation features and labels, the 
> gradient calculation function, the batch update function, the update strategy 
> (e.g. sync, async, hogwild!, stale-synchronous), the update frequency (e.g. 
> epoch or batch), the aggregation function, the number of epoch, the batch 
> size, the degree of parallelism as well as the checkpointing strategy (e.g. 
> rollback recovery).





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-05 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature is 
illustrated in Figure 1. We are interested in providing the model, the training 
features and labels, the validation features and labels, the gradient 
calculation function, the batch update function, the update strategy (e.g. 
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or 
batch), the aggregation function, the number of epochs, the batch size, the 
degree of parallelism as well as the checkpointing strategy (e.g. rollback 
recovery).  (was: The objective of “paramserv” built-in function is to update 
an initial or existing model with configuration. An initial function signature 
is illustrated in Figure 1. We are interested in providing the model, the 
training features and labels, the validation features and labels, the batch 
update function, the update strategy (e.g. sync, async, hogwild!, 
stale-synchronous), the update frequency (e.g. epoch or batch), the aggregation 
function, the number of epoch, the batch size, the degree of parallelism as 
well as the checkpointing strategy (e.g. rollback recovery).)

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature is 
> illustrated in Figure 1. We are interested in providing the model, the 
> training features and labels, the validation features and labels, the 
> gradient calculation function, the batch update function, the update strategy 
> (e.g. sync, async, hogwild!, stale-synchronous), the update frequency (e.g. 
> epoch or batch), the aggregation function, the number of epoch, the batch 
> size, the degree of parallelism as well as the checkpointing strategy (e.g. 
> rollback recovery).





[jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

2018-05-05 Thread LI Guobao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LI Guobao updated SYSTEMML-2299:

Description: The objective of the “paramserv” built-in function is to update an 
initial or existing model with configuration. An initial function signature is 
illustrated in Figure 1. We are interested in providing the model, the training 
features and labels, the validation features and labels, the batch update 
function, the update strategy (e.g. sync, async, hogwild!, stale-synchronous), 
the update frequency (e.g. epoch or batch), the aggregation function, the 
number of epochs, the batch size, the degree of parallelism as well as the 
checkpointing strategy (e.g. rollback recovery).

> API design of the paramserv function
> 
>
> Key: SYSTEMML-2299
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
> Project: SystemML
>  Issue Type: Sub-task
>Reporter: LI Guobao
>Assignee: LI Guobao
>Priority: Major
>
> The objective of “paramserv” built-in function is to update an initial or 
> existing model with configuration. An initial function signature is 
> illustrated in Figure 1. We are interested in providing the model, the 
> training features and labels, the validation features and labels, the batch 
> update function, the update strategy (e.g. sync, async, hogwild!, 
> stale-synchronous), the update frequency (e.g. epoch or batch), the 
> aggregation function, the number of epoch, the batch size, the degree of 
> parallelism as well as the checkpointing strategy (e.g. rollback recovery).


