[ https://issues.apache.org/jira/browse/SINGA-131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228181#comment-15228181 ]
ASF subversion and git services commented on SINGA-131:
-------------------------------------------------------
Commit 040cbb2e121164268284c4a8d005bd9aea83f40c in incubator-singa's branch
refs/heads/master from WANG Sheng
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=040cbb2 ]
SINGA-131 Implement and optimize hybrid training using both CPU and GPU
update test files
checked with cpplint
> Implement and optimize hybrid training using both CPU and GPU
> -------------------------------------------------------------
>
> Key: SINGA-131
> URL: https://issues.apache.org/jira/browse/SINGA-131
> Project: Singa
> Issue Type: Improvement
> Reporter: wangwei
> Labels: CPU, GPU, hybrid
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> We previously discussed implementing hybrid training with researchers from
> Stanford:
> http://mail-archives.apache.org/mod_mbox/singa-dev/201507.mbox/%3CCAJz0iLsd5iSCqqVU4QHLKzMO2o%2BFt-40kN8RgWkYhDn%3D6Qqqbw%40mail.gmail.com%3E
> Now that GPU training is supported, we can move on to this feature.
> The distributed training framework lends itself naturally to hybrid training
> with CPU and GPU. The first n workers would be assigned GPU cards (where n is
> the number of cards configured by the user), and the remaining workers would
> run on CPU. Some code may need updates and optimization to handle the memory
> transfers between GPU workers and CPU workers; most of it is in worker.cc,
> param.cc and stub.cc. (A device-assignment sketch follows this description.)
> Automatic tuning of the workload between GPU and CPU could be designed and
> implemented in this ticket or in a new one (one possible approach is sketched
> below).
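The device-assignment scheme described above (first n workers on GPU, the rest
on CPU) can be illustrated with a minimal C++ sketch. The Device enum,
WorkerConfig struct and AssignDevices() function below are hypothetical
illustrations, not SINGA's actual API:

#include <cstdio>
#include <vector>

enum class Device { kGPU, kCPU };

struct WorkerConfig {
  int worker_id;
  Device device;
  int gpu_card;  // valid only when device == Device::kGPU
};

// Workers [0, num_gpus) each get one GPU card; the rest run on CPU.
std::vector<WorkerConfig> AssignDevices(int num_workers, int num_gpus) {
  std::vector<WorkerConfig> configs;
  configs.reserve(num_workers);
  for (int i = 0; i < num_workers; ++i) {
    if (i < num_gpus)
      configs.push_back({i, Device::kGPU, i});   // GPU worker on card i
    else
      configs.push_back({i, Device::kCPU, -1});  // CPU worker, no card
  }
  return configs;
}

int main() {
  // 6 workers, 2 GPU cards configured: workers 0-1 on GPU, 2-5 on CPU.
  for (const auto& c : AssignDevices(6, 2))
    std::printf("worker %d -> %s\n", c.worker_id,
                c.device == Device::kGPU ? "GPU" : "CPU");
  return 0;
}

In SINGA itself the equivalent logic would presumably live where workers are
created (worker.cc and stub.cc, per the description above), with param.cc
handling the host/device memory copies needed when GPU and CPU workers
exchange parameter updates.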
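For the automatic tuning mentioned above, one simple strategy (an assumption
for illustration, not a design decision from this ticket) is to split each
mini-batch between the GPU and CPU workers in proportion to their measured
throughput. The SplitBatch() helper below is hypothetical:

#include <cstdio>
#include <utility>

// Returns {gpu_examples, cpu_examples} for a mini-batch of batch_size,
// proportional to the throughputs (e.g. images/sec) measured so far.
std::pair<int, int> SplitBatch(int batch_size, double gpu_throughput,
                               double cpu_throughput) {
  double total = gpu_throughput + cpu_throughput;
  int gpu_share = static_cast<int>(batch_size * gpu_throughput / total + 0.5);
  return {gpu_share, batch_size - gpu_share};
}

int main() {
  // GPU measured at 900 img/s, CPU at 100 img/s: GPU gets ~90% of the batch.
  auto split = SplitBatch(256, 900.0, 100.0);
  std::printf("GPU: %d examples, CPU: %d examples\n", split.first, split.second);
  return 0;
}

Re-measuring throughput every few iterations and re-splitting would keep the
faster device from waiting on the slower one at synchronization points.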