On Thursday, 14 May 2015 at 20:56:16 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 14 May 2015 at 20:28:20 UTC, Laeeth Isharc wrote:
>> My own is a pragmatic, commercial one. I have some problems that perhaps scale quite well, and rather than write the code using fork directly, I would rather have a higher-level wrapper along the lines of std.parallelism.

> Languages like Chapel, and extended versions of C++, have built-in support for parallel computing that is relatively effortless and designed by experts (Cray, IBM, etc.) to cover common patterns in demanding batch processing, for those who want something higher-level than plain C++ (or, in this case, D, which is pretty much the same thing).

Yes - I am sure there is excellent stuff here from which one may learn much, especially if approaching it from a more theoretical or enterprisey, industrial-scale perspective.

> However, you could consider combining single-threaded D processes with e.g. Python as a supervising process, if the datasets allow it. You'll find lots of literature on Unix Inter-Process Communication (IPC). Performance will be lower, but your own productivity might be higher, YMMV.
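To make the supervising-process idea concrete, here is a toy sketch in Python (all names hypothetical): the supervisor spawns several single-threaded workers - inline Python one-liners standing in for compiled D binaries - and collects one result per worker over plain stdout pipes, the simplest form of IPC.

```python
import subprocess
import sys

# Hypothetical worker command: an inline script standing in for a
# single-threaded D binary that reads one argument and prints one result.
CHILD = [sys.executable, "-c", "import sys; print(int(sys.argv[1]) ** 2)"]

def supervise(inputs):
    """Spawn one worker per input, then gather each worker's stdout."""
    procs = [subprocess.Popen(CHILD + [str(n)],
                              stdout=subprocess.PIPE, text=True)
             for n in inputs]
    results = []
    for p in procs:
        out, _ = p.communicate()   # wait for the worker and read its output
        results.append(int(out))
    return results

print(supervise([3, 5, 7]))   # [9, 25, 49]
```

Swapping the inline script for a path to a real worker binary is the only change needed; the supervisor itself never touches the computation.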

But why would one use Python, when fork itself isn't hard to use in a narrow sense, and neither is the kind of inter-process communication I have in mind for my tasks? It just seems to make sense to have a light wrapper. That some problems in parallel processing are hard is no reason not to work on the easier ones, where even an imperfect (but real) solution has great practical value. Sometimes I have the sense when talking with you that the answer to any question is anything but D! ;) (But I am sure I must be mistaken!)
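A minimal sketch of what such a light wrapper over fork might look like, written here in Python rather than D for brevity (the names `fork_map` and the one-item-per-child scheme are hypothetical, purely to illustrate the shape of the API): each child computes one result and sends it back to the parent over a pipe.

```python
import os
import pickle

def fork_map(func, items):
    """Apply func to each item in a forked child; collect results via pipes."""
    pipes = []
    for item in items:
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:                       # child: compute, write, exit
            os.close(r)
            with os.fdopen(w, "wb") as out:
                pickle.dump(func(item), out)
            os._exit(0)
        os.close(w)                        # parent keeps only the read end
        pipes.append((pid, r))
    results = []
    for pid, r in pipes:                   # preserve input order
        with os.fdopen(r, "rb") as inp:
            results.append(pickle.load(inp))
        os.waitpid(pid, 0)                 # reap the child
    return results

print(fork_map(lambda x: x * x, [1, 2, 3, 4]))   # [1, 4, 9, 16]
```

A real wrapper would add chunking, error propagation, and a bound on the number of live children, but the core - fork, pipe, serialise, reap - fits in a page.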

>> Perhaps such a wrapper would be flawed and limited, but often something is better than nothing, even if not perfect. And I mention it on the forum only because the problems I face have usually turned out to be those faced by many others too.

> You need momentum in order to get from a raw state to something polished, so you essentially need a larger community that both has experience with the topic and a need for it, in order to end up with a sensible framework that is maintained.

True. But we are not speaking of getting from a raw state to perfection, just of starting to play with the problem. If Walter Bright had listened to well-intentioned advice, he wouldn't be in the compiler business, let alone have given us what became D. I am no Walter Bright, but this is an easier problem to start exploring, and it would be beyond the scope of anything I would do just by myself.

> If you can get away with it, the most common simplistic approach seems to be map-reduce, because it is easy to distribute over many machines and there are frameworks that do the tedious bits for you.

Yes, indeed. But my question was more about the distinction between processes and threads, and its non-obvious implications for the design of such a framework.
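One concrete implication of that distinction: with processes, nothing is shared by default, so a map-reduce framework must serialise data across the process boundary in both directions - exactly the tedious bit the frameworks hide. A minimal illustration in Python (names hypothetical; `multiprocessing` uses fork on Unix):

```python
from multiprocessing import Pool
from functools import reduce

def square(x):
    """Must be a top-level function so it can be pickled to the workers."""
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:                       # four worker processes
        mapped = pool.map(square, range(10))    # map step: inputs and
                                                # outputs are pickled
    total = reduce(lambda a, b: a + b, mapped)  # reduce step, in the parent
    print(total)   # 285
```

With threads the pickling would be unnecessary but the workers would share mutable state; with processes the isolation is free and the copying is the cost. That trade-off is what a D wrapper would have to take a position on.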

Nice chatting.

Laeeth.
