On 4/8/12 9:25 PM, Mic wrote:
Hi,
Does spawn spread the task across computer nodes in a cluster, e.g. as in Julia http://julialang.org/manual/parallel-computing/ ?

Currently, no.

Any plans to also build spawn on top of the MapReduce and HDFS APIs for Hadoop, as in Python http://sourceforge.net/apps/mediawiki/pydoop/index.php?title=Main_Paget ?

Currently, no.

We are currently not targeting distributed computing. Some of the features in Rust—e.g., unique pointer transfer between tasks—are really intended for processes with shared memory.
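To illustrate what "unique pointer transfer between tasks" looks like, here is a minimal sketch in today's Rust (`std::thread` and `Box` stand in for the 2012-era task and `~T` unique-pointer APIs, which had different syntax):

```rust
use std::thread;

fn main() {
    // A uniquely-owned heap allocation; the modern analogue of ~T.
    let msg: Box<String> = Box::new(String::from("hello"));

    // Ownership of the Box moves into the new task. No heap data is
    // copied; only the pointer changes hands, which is exactly why this
    // feature presumes tasks sharing one address space. The parent can
    // no longer use `msg` after this point.
    let handle = thread::spawn(move || msg.len());

    let len = handle.join().unwrap();
    assert_eq!(len, 5);
}
```

The design choice here is that the type system forbids the sender from retaining an alias, so the transfer is safe without any copying or locking.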

Nonetheless, the current design would permit a distributed implementation: all sendable things are also copyable (and tree-shaped, for that matter), which means that they could in theory be efficiently serialized and sent over the wire.
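A hedged sketch of that point, again using modern Rust channels rather than the 2012 API: because a sendable value is tree-shaped (no aliases, no cycles), it moves between tasks by pointer today, and the same ownership discipline would let a hypothetical distributed runtime flatten it depth-first for the wire:

```rust
use std::sync::mpsc;
use std::thread;

// A tree-shaped, sendable value: uniquely owned all the way down, so it
// could in principle be serialized and rebuilt on another node.
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

fn sum(t: &Tree) -> i32 {
    match t {
        Tree::Leaf(n) => *n,
        Tree::Node(l, r) => sum(l) + sum(r),
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let t = Tree::Node(
        Box::new(Tree::Leaf(1)),
        Box::new(Tree::Node(Box::new(Tree::Leaf(2)), Box::new(Tree::Leaf(3)))),
    );
    // In-process, the whole tree is sent by moving one pointer; over a
    // network, the absence of shared aliases is what would make a
    // serialize-and-resend implementation straightforward.
    thread::spawn(move || tx.send(t).unwrap());
    let received = rx.recv().unwrap();
    assert_eq!(sum(&received), 6);
}
```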

However, some planned features do not lend themselves so well to a distributed setting. For example, we would like to use regions to allow constructing a message of arbitrary shape (for example, a graph) that can then be sent as a whole. It is of course possible to serialize graphs, but it is harder and slower than serializing trees. Still, so long as we stick to a strict "no shared memory" model (which I think we will), a distributed implementation remains a possibility.

(Data-parallelism or small-scale task-parallelism, as discussed in the recent thread on ray tracing, is a different matter, of course. That often only makes sense with shared memory, but we don't currently have any features targeting it.)


Niko
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev