On Saturday, 14 March 2015 13:06:26 UTC, John wrote:
>
> My main use cases involve operations that I want to parallelize over the
> last dimension of an array.
>
I think this is a nice, straightforward case, and something that a lot of
other vector math libraries do (e.g. I believe that Intel's MKL and Apple's
Accelerate library can both use multiple threads, in addition to using SIMD
operations). While it would be possible to wrap those libraries (when
they're available), it would be nice to be able to do this ourselves.

> In C/C++ I'd use #pragma omp parallel for or a thread pool.
> E.g., say I have a function f that operates in-place, like
>
> function f(x, y)
>     for i = 1:length(x)
>         y[i] = exp(x[i])
>     end
> end
>
> It would be nice to be able to use something like the current sync/async
> syntax, but have it dispatch the function calls using threads instead of
> processes:
>
> X = zeros(1000000, 4)
> Y = zeros(1000000, 4)
>
> pool = ThreadPool(4)
>
> @thread_sync pool begin
>     for p = 1:size(X, 2)
>         @thread_async pool begin
>             f(slice(X, :, p), slice(Y, :, p))
>         end
>     end
> end
>
> On Friday, March 13, 2015 at 3:52:37 AM UTC, Viral Shah wrote:
>>
>> I am looking to put together a set of use cases for our multi-threading
>> capabilities - mainly to push forward as well as a showcase. I am thinking
>> of starting with stuff in the microbenchmarks and the shootout
>> implementations that are already in test/perf.
>>
>> I am looking for other ideas that would be of interest. If there is real
>> interest, we can collect all of these in a repo in JuliaParallel.
>>
>> -viral
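For what it's worth, the column-parallel pattern in the quoted sketch can be written against `Base.Threads` (which landed in later Julia releases; the `ThreadPool` / `@thread_sync` / `@thread_async` names above are proposed syntax, not an existing API). A minimal sketch, using `view` in place of the old `slice`:

```julia
# Column-parallel elementwise exp, one task per column.
# Assumes Julia is started with multiple threads (`julia -t 4`
# or JULIA_NUM_THREADS=4); with one thread it still runs, serially.
using Base.Threads

function f!(y, x)
    # Writes exp.(x) into the preallocated output y.
    @inbounds for i in eachindex(x, y)
        y[i] = exp(x[i])
    end
    return y
end

X = rand(1_000_000, 4)
Y = zeros(1_000_000, 4)

# @threads partitions the iteration range across the available threads;
# each column is touched by exactly one task, so no locking is needed.
@threads for p in 1:size(X, 2)
    f!(view(Y, :, p), view(X, :, p))
end
```

Because each task writes to a disjoint column of `Y`, this avoids the data races that a finer-grained split of the inner loop would have to guard against.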
