[ 
https://issues.apache.org/jira/browse/SYSTEMML-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805329#comment-15805329
 ] 

Mike Dusenberry edited comment on SYSTEMML-686 at 1/6/17 7:11 PM:
------------------------------------------------------------------

Copying from GitHub [PR 320|https://github.com/apache/incubator-systemml/pull/320]:

[~niketanpansare]:
{quote}
Regarding distributed implementations of convolution and max pooling, there are 
the following options:

# Current approach (Mini-batch training): for loop \{ CP conv + CP max_pool \}
# Batched approach (applicable for batched training as well as batched 
prediction): for loop \{ SPARK conv + SPARK max_pool \}
# Parfor Prediction: remote SPARK parfor loop \{ CP conv + CP max_pool \}

@dusenberrymw Which case do you think we should address first?
{quote}
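To make the three options concrete, below is a rough DML sketch of option 1, the current mini-batch training loop in which each iteration runs CP (single-node) {{conv2d}} and {{max_pool}} instructions. All shapes, names ({{X}}, {{W}}, {{batch_size}}, etc.), and stride/padding values are illustrative assumptions, not taken from the PR; note also that whether these builtins compile to CP or SPARK instructions is ultimately an optimizer decision based on memory estimates, so options 1 and 2 share essentially the same script.

{code}
# Hypothetical mini-batch loop (option 1): CP conv2d + CP max_pool per batch.
# X: input matrix, one unrolled C*Hin*Win image per row; W: filters [F, C*Hf*Wf].
N = nrow(X)
iters = ceil(N / batch_size)
for (i in 1:iters) {
  beg = (i - 1) * batch_size + 1
  end = min(N, i * batch_size)
  X_batch = X[beg:end,]
  # Convolution over the current mini-batch
  out = conv2d(X_batch, W, input_shape=[end-beg+1, C, Hin, Win],
               filter_shape=[F, C, Hf, Wf], stride=[1, 1], padding=[0, 0])
  # Max pooling over the convolution output
  pooled = max_pool(out, input_shape=[end-beg+1, F, Hout, Wout],
                    pool_size=[2, 2], stride=[2, 2], padding=[0, 0])
  # ... remainder of the forward/backward pass ...
}
{code}

Option 3 would instead wrap a loop like this in a remote-SPARK {{parfor}} for parallel prediction, with each worker executing the CP instructions locally.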





> Implement Spark instructions for convolution and pooling functions
> ------------------------------------------------------------------
>
>                 Key: SYSTEMML-686
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-686
>             Project: SystemML
>          Issue Type: Task
>            Reporter: Niketan Pansare
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
