Thanks for your information. I guess I need to wait until this pull request 
is merged to the main branch.

Meanwhile, I think a workaround is to set the number of BLAS threads equal to 
the number of CPU cores after adding workers. For example,

addprocs(4)
BLAS.set_num_threads(Sys.CPU_CORES)

Then it uses all cores for the computation again.
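Putting the workaround together, a minimal sketch (using the Julia 0.4/0.5-era 
names from this thread; the timing line is only there to eyeball core usage):

```julia
addprocs(4)                          # spawn 4 worker processes

# addprocs resets OpenBLAS to 1 thread on the master process;
# restore multithreaded BLAS by hand:
BLAS.set_num_threads(Sys.CPU_CORES)

X = randn(5000, 5000)
Y = randn(5000, 5000)
@time X * Y                          # should now saturate all cores again
```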

Thanks again.


On Saturday, August 13, 2016 at 1:44:18 AM UTC+8, Kristoffer Carlsson wrote:
>
> And https://github.com/JuliaLang/julia/issues/16729
>
> On Friday, August 12, 2016 at 7:43:56 PM UTC+2, Kristoffer Carlsson wrote:
>>
>> Ref https://github.com/JuliaLang/julia/pull/17429
>>
>> On Friday, August 12, 2016 at 7:39:40 PM UTC+2, Doan Thanh Nam wrote:
>>>
>>> Hi,
>>>
>>> I am new to Julia Language and I am curious to use it for my work. 
>>> Recently, I have tried to use it for some of my projects and observed some 
>>> interesting cases.
>>>
>>> First of all, when I start the Julia REPL without adding any worker 
>>> processes and do a matrix multiplication, Julia uses all the CPU cores in my 
>>> computer to speed up the computation. I guess it uses OpenBLAS. The code is
>>> X = randn(5000, 5000); Y = randn(5000, 5000); X * Y;
>>>
>>>
>>> However, after adding some worker processes with *addprocs*(4) and 
>>> doing the matrix multiplication, it runs the computation on only 1 CPU core, 
>>> which slows down my performance. Even if I remove all the worker processes 
>>> that I added and run the multiplication again, it still uses only one CPU 
>>> core. The code is
>>> addprocs(4); X = randn(5000, 5000); Y = randn(5000, 5000); X * Y;
>>> rmprocs(2); rmprocs(3); rmprocs(4); rmprocs(5); X = randn(5000, 5000); 
>>> Y = randn(5000, 5000); X * Y;
>>>
>>>
>>> My question here is: is there any way to use all cores for matrix 
>>> multiplication (and other matrix operations) after calling 
>>> *addprocs*()?
>>>
>>> Thanks.
>>>
>>
