[
https://issues.apache.org/jira/browse/SINGA-308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15961915#comment-15961915
]
Muhammad Hamdan commented on SINGA-308:
---------------------------------------
I tried that, but got a segfault!
MemPoolConf mem_conf;
mem_conf.add_device(0);
// mem_conf.add_device(1);
// std::shared_ptr<DeviceMemPool> mem_pool(new CnMemPool(mem_conf));
// std::shared_ptr<CudaGPU> dev_1(new CudaGPU(0, mem_pool));
// std::shared_ptr<CudaGPU> dev_2(new CudaGPU(1, mem_pool));
Does this code allocate memory across the two GPUs from a shared pool?
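For reference, here is the two-GPU variant written out as a compilable sketch. The include paths, the singa:: namespace prefixes, and the main() wrapper are my assumptions from reading the headers, so please correct me if any of this is off:

#include <memory>
#include "singa/core/device.h"    // CudaGPU
#include "singa/core/memory.h"    // CnMemPool, DeviceMemPool
#include "singa/proto/core.pb.h"  // MemPoolConf

int main() {
  // Register both GPU ids with the pool config before constructing the pool.
  singa::MemPoolConf mem_conf;
  mem_conf.add_device(0);
  mem_conf.add_device(1);

  // One CnMemPool shared by both devices; my understanding is that each
  // CudaGPU still allocates on its own physical GPU through this pool.
  std::shared_ptr<singa::DeviceMemPool> mem_pool(new singa::CnMemPool(mem_conf));
  std::shared_ptr<singa::CudaGPU> dev_1(new singa::CudaGPU(0, mem_pool));
  std::shared_ptr<singa::CudaGPU> dev_2(new singa::CudaGPU(1, mem_pool));
  return 0;
}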
I assume the model configuration should not depend on which platform it is
trained on (Cudnn_Conv or SingaConv); these are just identifiers, right?
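And on the CPU-GPU question from the issue itself, this is roughly what I would try first: one CppCPU device next to one CudaGPU device, with tensors moved between them explicitly. Again only a sketch based on my reading of device.h and tensor.h (CppCPU, Tensor::ToDevice and Tensor::ToHost are my assumptions about the API):

#include <memory>
#include "singa/core/device.h"
#include "singa/core/tensor.h"

int main() {
  // One host device and one GPU device on the same machine.
  std::shared_ptr<singa::Device> cpu(new singa::CppCPU());
  std::shared_ptr<singa::Device> gpu(new singa::CudaGPU(0));

  // A tensor lives on exactly one device at a time; ToDevice/ToHost copy it
  // across, so layers split between cpu and gpu would exchange data this way.
  singa::Tensor x(singa::Shape{32, 3, 32, 32}, cpu);
  x.ToDevice(gpu);  // move the batch to the GPU
  x.ToHost();       // and back to the CPU
  return 0;
}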
> CPU-GPU parallelism
> --------------------
>
> Key: SINGA-308
> URL: https://issues.apache.org/jira/browse/SINGA-308
> Project: Singa
> Issue Type: Test
> Components: Core, PySINGA
> Environment: Ubuntu 16.04
> CPU-GPU of the same machine
> Reporter: Muhammad Hamdan
> Labels: test
>
> Is it possible to parallelize the alexnet model from the cifar10 example across a
> CPU and a GPU on the same machine, instead of 2 GPUs, assuming asynchronous
> communication between the two components?