LiKai Liu wrote:
> Is it possible to use the "future" in Alice ML to speed up computation on
> an SMP or multicore machine? So far my little experiment below does not
> seem to use the CPU above 100% (as indicated in "top").
No, unfortunately, the SEAM VM underlying the Alice system does not
currently support multiple native threads. We cannot say if and when
this will change.
However, you can employ Alice ML's distribution layer to spawn several
Alice processes and have them communicate via proxies; see the
distributed-programming chapter of the Alice manual. The worker example
at the bottom of that page shows the basic setup.
To (potentially) utilize the 8 cores of your machine, it should suffice
to simply define hosts as
val hosts = List.tabulate (8, fn _ => "localhost")
Obviously, this approach does not make much sense in your specific
example, because starting another Alice VM is much too expensive here
(in fact, even multi-threading is not useful for such heavy recursion).
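For reference, the plain futures version you were presumably testing can
be written as below (a minimal sketch; spawn evaluates its argument in a
fresh Alice thread, but all Alice threads are multiplexed onto a single
native thread, so top will never show more than 100%):

    (* concurrent, but confined to one native thread / one core *)
    fun fib n =
        if n <= 1 then n
        else (spawn fib (n - 1)) + fib (n - 2)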
But just to demonstrate the pattern, here is how you could express it
(caveat: didn't have time to test it):
import structure Remote from "x-alice:/lib/distribution/Remote"
import signature REMOTE from "x-alice:/lib/distribution/REMOTE-sig"
fun fib' myrun n if (n <= 1) = n
| fib' myrun n =
    let
        fun dist n' =
            comp
                import structure Remote : REMOTE from
                    "x-alice:/lib/distribution/Remote"
            in
                val it : int
            with
                val it = fib' Remote.run n'
            end
        structure X = spawn unpack myrun ("localhost", dist (n - 1)) :
                          (val it : int)
        structure Y = spawn unpack myrun ("localhost", dist (n - 2)) :
                          (val it : int)
    in
        X.it + Y.it
    end
val fib = fib' Remote.run
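Assuming the sketch above compiles, you would call it like any ordinary
function; just bear in mind that every recursive step forks two fresh VM
processes, so the process count grows roughly like fib n itself:

    val result = fib 5    (* keep n small: each step spawns 2 processes *)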
What this does is the following: on every recursive step it dynamically
creates two components computing the respective sub-results (note that
each component closes over the local argument n'), which are then run in
separate processes on the same host. One complication is that the
Remote.run function is sited, so it has to be imported locally in every
process; factoring it out as an additional parameter (myrun) makes the
pattern a bit more elegant.
Hope this helps,
alice-users mailing list