On Sunday, 18 February 2018 at 17:54:58 UTC, SrMordred wrote:
I'm experimenting with threads and related topics lately.
(I've just started, so there may be some terrible mistakes here.)
With this baseline code:
foreach(i ; 0 .. SIZE)
{
    results[i] = values1[i] * values2[i];
}
and then with these three other approaches: parallel, spawn, and
Thread.
These were my results:
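For reference, the parallel variant looks roughly like the sketch below (the exact code is in the run.dlang.io link further down; the float element type and function signature are just my assumptions here):

// Rough sketch of the parallel variant, not the exact linked code.
import std.parallelism : taskPool;

void multiplyParallel(float[] results, const(float)[] values1, const(float)[] values2)
{
    // taskPool.parallel splits the iterations of the loop across the
    // default task pool, so all cores work on disjoint chunks of results.
    foreach (i, ref r; taskPool.parallel(results))
        r = values1[i] * values2[i];
}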
_base : 456 ms and 479 us
_parallel : 331 ms, 324 us, and 4 hnsecs
_concurrency : 367 ms, 348 us, and 2 hnsecs
_thread : 369 ms, 565 us, and 3 hnsecs
(code here : https://run.dlang.io/is/2pdmmk )
All methods show only minor speedup gains; I was expecting a lot
more.
Since I have 7 cores, I expected something below 100 ms (roughly 456 ms / 7 ≈ 65 ms).
The operation is trivial and the dataset is rather small. In such
cases SIMD, e.g. via array ops, is the way to go:
results[] = values1[] * values2[];
Parallelism gets more interesting with more expensive
operations. You may also try bigger sizes, or both.
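For example (a quick sketch, not your benchmark; the names, sizes, and per-element function are made up), compare the cheap array op against the same heavy per-element work done serially and in parallel:

import std.datetime.stopwatch : benchmark;
import std.math : log, sqrt;
import std.parallelism : taskPool;
import std.stdio : writeln;

void main()
{
    enum size = 4_000_000;
    auto a = new double[size];
    auto b = new double[size];
    auto r = new double[size];
    foreach (i, ref x; a) x = i + 1.0;
    b[] = 3.0;

    // Cheap per-element work: a vectorised array op is hard to beat here.
    void cheapArrayOp() { r[] = a[] * b[]; }

    // Heavier per-element work, done serially.
    void heavySerial()
    {
        foreach (i, ref x; r)
            x = sqrt(a[i]) * log(a[i] + b[i]);
    }

    // The same heavy work spread over the default task pool.
    void heavyParallel()
    {
        foreach (i, ref x; taskPool.parallel(r))
            x = sqrt(a[i]) * log(a[i] + b[i]);
    }

    auto times = benchmark!(cheapArrayOp, heavySerial, heavyParallel)(10);
    writeln("cheap array op : ", times[0]);
    writeln("heavy serial   : ", times[1]);
    writeln("heavy parallel : ", times[2]);
}

With heavy per-element work the parallel loop should pull clearly ahead of the serial one, while for the plain multiply the array op alone is usually the fastest option.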
I'm not seeing false sharing in this case, or am I wrong?
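My mental model of false sharing is something like the sketch below: each thread writes only its own counter, but the counters sit on the same cache line, so the line keeps bouncing between cores (a hypothetical example, not the linked code):

import core.thread : Thread;

enum nThreads = 4;
enum iters = 10_000_000;

// The per-thread counters are packed next to each other, so several of
// them end up on the same cache line.
__gshared long[nThreads] counters;

// Helper so each worker captures its own index by value.
Thread spawnWorker(size_t idx)
{
    auto th = new Thread({
        foreach (_; 0 .. iters)
            counters[idx]++;   // hot write; neighbouring counters share the line
    });
    th.start();
    return th;
}

void main()
{
    Thread[] threads;
    foreach (t; 0 .. nThreads)
        threads ~= spawnWorker(t);
    foreach (th; threads)
        th.join();
}

In the multiply loop each thread writes one long contiguous chunk of results, so at most the chunk boundaries could collide, which is why I don't think false sharing matters here.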
If someone can expand on this, I'll be grateful.
Thanks!