On 1/2/11 10:39 AM, dsimcha wrote:
On 1/1/2011 6:07 PM, Andrei Alexandrescu wrote:
I think David Simcha's library is close to reviewable form. Code:

http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/std_parallelism.d



Documentation:

http://cis.jhu.edu/~dsimcha/d/phobos/std_parallelism.html

Here are a few comments:

* parallel is templated on range, but not on operation. Does this affect
speed for brief operations (such as the one given in the example,
squares[i] = i * i)? I wonder if using an alias wouldn't be more
appropriate. Some performance numbers would be very useful in any case.

Will benchmark, though the issue here is that I like the foreach syntax,
and that can't be done with an alias as you describe. Also, IIRC (I'll
do the benchmark at some point to be sure), for examples like that memory
bandwidth is the limiting factor, so the delegate call doesn't matter anyhow.

I think anyone who wants to run something in parallel is motivated primarily by efficiency, so aesthetics should be secondary; putting them first would leave your target audience unsatisfied.
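
For concreteness, this is the kind of loop I mean by "brief operations",
going by the foreach form in your docs (a sketch only; names may be off):

import std.parallelism;

void main()
{
    auto squares = new size_t[](10_000);

    // The loop body is lowered into a delegate that the pool calls once
    // per work unit; the question is whether that indirect call is
    // measurable when the body is this cheap.
    foreach (i, ref s; parallel(squares))
        s = i * i;
}

Whether the per-iteration delegate call shows up against a body that cheap
is exactly the number I'd like to see.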

* Why is ThreadPool a class and what are the prospects of overriding its
members?

It's a class b/c I wanted to have reference semantics and a monitor. I
could make it a final class, as I didn't design for overriding anything.

Final should be it then.
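
Something along these lines is all I mean (a sketch, not checked against
your code; member names are placeholders):

// A final class keeps reference semantics and the per-object monitor,
// but nothing is overridable, so member calls need not be virtual.
final class ThreadPool
{
    private size_t nWorkers;

    this(size_t nWorkers) { this.nWorkers = nWorkers; }

    // synchronized members lock the object's built-in monitor.
    synchronized void waitStop()
    {
        // signal the workers to finish queued work, then block until they exit
    }
}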

* Can't we define the behavior of break inside a parallel loop?

I left the behavior of this undefined because I didn't have any good
ideas about what it should do. If you can suggest something that would
be useful, I'll define it. I'd rather not define it to do something
completely arbitrary and not obviously useful b/c if we eventually come
up w/ a useful semantic our hands will be tied.

As Sean said, abort the current computation, I guess.
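
In the meantime the effect can be approximated with a shared flag, which
also illustrates the semantics I'd expect from break (a sketch only; the
flag and the error check are made up):

import core.atomic;
import std.parallelism;

void main()
{
    auto data = new int[](100_000);
    shared bool abortRequested = false;

    foreach (i, ref x; parallel(data))
    {
        // Once any iteration requests an abort, work units that haven't
        // started yet become no-ops; whatever was already computed stays.
        if (atomicLoad(abortRequested))
            continue;

        if (i == 12_345)               // stand-in for a real error condition
        {
            atomicStore(abortRequested, true);
            continue;
        }

        x = cast(int)(i * 2);
    }
}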

* I think it does make sense to evaluate a parallel map lazily by using
a finite buffer. Generally map looks the most promising so it may be
worth investing some more work in it to make it "smart lazy".

Can you elaborate on this? I'm not sure what you're suggesting.
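
The idea is to compute the result a fixed-size buffer at a time instead of
all at once: memory stays bounded, and elements the consumer never reaches
are never computed. A very rough sketch of the shape I mean, restricted to
slices for brevity (all names are made up, not your API; a smarter version
would also compute the next buffer in the background while the current one
is being drained):

import std.algorithm : min;
import std.parallelism;

struct BufferedMap(alias fun, T)
{
    private T[] input;
    private size_t bufSize;
    private typeof(fun(T.init))[] buffer;
    private size_t pos;

    this(T[] input, size_t bufSize)
    {
        this.input = input;
        this.bufSize = bufSize;
        refill();
    }

    bool empty() const { return buffer.length == 0; }
    auto front() const { return buffer[pos]; }

    void popFront()
    {
        if (++pos == buffer.length)
            refill();
    }

    private void refill()
    {
        immutable n = min(bufSize, input.length);
        buffer.length = n;
        pos = 0;
        // The only eager step: fill one buffer's worth of results in parallel.
        foreach (i, ref b; parallel(buffer))
            b = fun(input[i]);
        input = input[n .. $];
    }
}

auto bufferedMap(alias fun, T)(T[] input, size_t bufSize = 1024)
{
    return BufferedMap!(fun, T)(input, bufSize);
}

unittest
{
    static int sq(int x) { return x * x; }
    int[] result;
    foreach (y; bufferedMap!sq([1, 2, 3, 4, 5], 2))
        result ~= y;
    assert(result == [1, 4, 9, 16, 25]);
}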


* waitStop() -> join()?

I like waitStop() better, though join() is more standard and I've
received this comment from some friends, too. If there's a strong
preference for join(), then I'll change it rather than wasting time on
bikeshed issues.

If they called it "fork-waitStop parallelism" I'd like waitStop better, too. But they call it "fork-join parallelism".

* Why runCallable()? There's no runUncallable().

runCallable() is in the weird position of being exposed to document how
running delegates and function pointers works, without being part of the
public interface that's meant to be used directly. I gave it a verbose
name for clarity. Are you suggesting I just call it run?

I'd think so, unless there are reasons not to.

Thanks for working on this!


Andrei
