[ https://issues.apache.org/jira/browse/SINGA-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15006250#comment-15006250 ]

ASF subversion and git services commented on SINGA-80:
------------------------------------------------------

Commit 32e09219129a1dba359ed02760754e1c63e1480f in incubator-singa's branch 
refs/heads/master from [~flytosky]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=32e0921 ]

SINGA-80 New Blob Level and Address Level Math Operation Interface

Move cpu_asum out of the Blob class into a free function Asum(const Blob<Dtype>&).
Asum and Scale are implemented using cblas_sasum and cblas_sscal, i.e., only 
the float type is considered.
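As a rough illustration of the change, the free-function style might look like 
the sketch below. This is not SINGA's actual code: the toy Blob and its 
accessors are assumptions for the sake of a self-contained example, while 
cblas_sasum and cblas_sscal are the real single-precision CBLAS routines.

    #include <vector>
    #include <cblas.h>

    // Toy stand-in for SINGA's Blob; only the accessors the sketch needs.
    template <typename Dtype>
    class Blob {
     public:
      explicit Blob(int count) : data_(count) {}
      int count() const { return static_cast<int>(data_.size()); }
      const Dtype* cpu_data() const { return data_.data(); }
      Dtype* mutable_cpu_data() { return data_.data(); }
     private:
      std::vector<Dtype> data_;
    };

    // Asum moved out of Blob into a free function; float-only because it
    // dispatches to the single-precision cblas_sasum.
    inline float Asum(const Blob<float>& b) {
      return cblas_sasum(b.count(), b.cpu_data(), 1);
    }

    // Scale, likewise backed by the single-precision cblas_sscal.
    inline void Scale(float alpha, Blob<float>* b) {
      cblas_sscal(b->count(), alpha, b->mutable_cpu_data(), 1);
    }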


> New Blob Level and Address Level Math Operation Interface
> ---------------------------------------------------------
>
>                 Key: SINGA-80
>                 URL: https://issues.apache.org/jira/browse/SINGA-80
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: Jinyang Gao
>            Assignee: Jinyang Gao
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> We are going to provide a new two-level math interface to replace the 
> current mshadow. The higher, blob-level interface will be used by the layer 
> level. It is xpu-transparent and will support general matrix, element-wise, 
> reduce/expand, and pack/unpack operations, etc., at the blob level. There 
> will be no further need to convert a Blob object into a tensor object 
> before a math operation. The lower, address-level interface will support 
> efficient CPU/GPU computation on simple data arrays.
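
The two levels might fit together as in the following sketch (illustrative 
only: the function names are assumptions, not SINGA's actual API, and it 
reuses the toy Blob from the sketch above). An address-level routine works on 
a raw array, and an xpu-transparent blob-level wrapper dispatches to it, so a 
layer never unpacks a blob itself.

    // Address level: an efficient routine on a plain data array (CPU path).
    inline void cpu_add(int n, const float* a, const float* b, float* c) {
      for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    // Blob level: what a layer would call. The device is hidden here, so no
    // blob-to-tensor conversion is needed before the math operation. A real
    // implementation would branch to a GPU kernel for device-resident blobs;
    // only the CPU path is sketched.
    inline void Add(const Blob<float>& a, const Blob<float>& b,
                    Blob<float>* c) {
      cpu_add(a.count(), a.cpu_data(), b.cpu_data(), c->mutable_cpu_data());
    }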



