[ https://issues.apache.org/jira/browse/SYSTEMML-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fei Hu updated SYSTEMML-1809:
-----------------------------
    Description: 
In the current version, the distributed MNIST_LeNet_Sgd model training can be optimized in the following areas:
# Optimize the DML scripts with the backend engine in mind; for example, intermediate matrices are exported to HDFS, so unnecessary intermediate matrices should be avoided.
# Improve the efficiency of matrix subsetting.
# Data locality: in {{RemoteParForSpark}}, tasks are parallelized without considering data locality, which causes a lot of data shuffling when the input data is large (see the first sketch after this list).
# Result merge: the current experiments indicate that the result merge takes more time than the model training itself (see the second sketch below).
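
For the data-locality item, here is a minimal sketch (not SystemML's actual {{RemoteParForSpark}} code) of how ParFor tasks could be parallelized with locality preferences; the {{ParForTask}} descriptor and its preferred hosts are assumptions for illustration:

{code:scala}
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

object LocalityAwareParFor {
  // Hypothetical task descriptor: a ParFor task id plus the HDFS hosts that
  // hold the matrix blocks the task reads.
  case class ParForTask(taskId: Int, preferredHosts: Seq[String])

  def parallelizeWithLocality(sc: SparkContext, tasks: Seq[ParForTask]): RDD[ParForTask] = {
    // makeRDD accepts (element, preferredLocations) pairs, so the Spark scheduler
    // can place each task near its input blocks instead of shuffling the data.
    sc.makeRDD(tasks.map(t => (t, t.preferredHosts)))
  }
}
{code}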

After these optimizations, we need to compare the performance with distributed TensorFlow.
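
For the result-merge item, a minimal sketch (not the existing ParFor result-merge implementation): partial results from the workers are combined block-wise on the cluster with {{reduceByKey}}, so only the merged matrix reaches the driver. The block index type and the element-wise combine function are simplifying assumptions:

{code:scala}
import org.apache.spark.rdd.RDD

object BlockwiseResultMerge {
  type BlockIndex = (Long, Long)   // (row-block, column-block) id, simplified
  type Block = Array[Double]       // simplified dense block

  def merge(partials: RDD[(BlockIndex, Block)]): RDD[(BlockIndex, Block)] =
    // Combine blocks that share an index in parallel on the workers; a plain
    // element-wise sum stands in here for the real merge function.
    partials.reduceByKey((a, b) => a.zip(b).map { case (x, y) => x + y })
}
{code}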

  was:
In the current version, the distributed MNIST_LeNet_Sgd model training can be optimized in the following areas:
# Optimize the DML scripts with the backend engine in mind; for example, intermediate matrices are exported to HDFS, so unnecessary intermediate matrices should be avoided.
# Data locality: in {{RemoteParForSpark}}, tasks are parallelized without considering data locality, which causes a lot of data shuffling when the input data is large.
# Result merge: the current experiments indicate that the result merge takes more time than the model training itself.

After these optimizations, we need to compare the performance with distributed TensorFlow.


> Optimize the performance of the distributed MNIST_LeNet_Sgd model training
> --------------------------------------------------------------------------
>
>                 Key: SYSTEMML-1809
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1809
>             Project: SystemML
>          Issue Type: Task
>    Affects Versions: SystemML 1.0
>            Reporter: Fei Hu
>            Assignee: Fei Hu
>              Labels: RemoteParForSpark, deeplearning
>
> In the current version, the distributed MNIST_LeNet_Sgd model training can be optimized in the following areas:
> # Optimize the DML scripts with the backend engine in mind; for example, intermediate matrices are exported to HDFS, so unnecessary intermediate matrices should be avoided.
> # Improve the efficiency of matrix subsetting.
> # Data locality: in {{RemoteParForSpark}}, tasks are parallelized without considering data locality, which causes a lot of data shuffling when the input data is large.
> # Result merge: the current experiments indicate that the result merge takes more time than the model training itself.
> After these optimizations, we need to compare the performance with distributed TensorFlow.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
