Leon Smith wrote:
Heinrich Apfelmus wrote:
which were introduced by John Hughes in his Phd thesis from 1983. They
are intriguing! Unfortunately, I haven't been able to procure a copy of
Hughes' thesis, either electronic or on paper. :( Can anyone help? Are
there any other resources about this parallel approach?
Aye, returning lazy pairs is one of those things that seems rather
magical in several respects. Out of curiosity, have you looked at
the unsafePerformIO thought experiment near the end of my Monad Reader
article? It demonstrates that returning lazy pairs can introduce
multiple code paths through a single function, reminiscent of (but
different from) multi-mode logical relations. (Mercury, for example,
optimizes relations differently depending on their mode.)
Yes, the multiple-code-paths phenomenon is pretty much the essence of
lazy evaluation: only one part of the whole expression is evaluated,
and which part that is depends on the demand pattern.
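As a concrete illustration (a toy example of my own, not from Hughes'
thesis): a function returning a lazy pair, where Debug.Trace makes
visible which code path the caller's demand pattern selects:

```haskell
import Debug.Trace (trace)

-- A lazy pair whose components carry separate computations;
-- the caller's demand pattern decides which one actually runs.
both :: Int -> (Int, Int)
both n = ( trace "computing fst" (n * 2)
         , trace "computing snd" (n + 1) )

main :: IO ()
main = print (fst (both 5))
```

Running this traces only "computing fst"; the second component is
never evaluated, because nothing demands it.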
Thanks to the kind souls on haskell-cafe, I finally understand the
SYNCH primitive. The idea is that in
let (y,z) = SYNCH x in ...
the variables y and z are bound to x , but the value of x is only
evaluated when *both* y and z are demanded. If you demand only y, then
evaluation will stall. Clearly, this is useless without some form of
parallelism that makes it possible to demand both at the same time.
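Lacking the original primitive, here is a minimal sketch of SYNCH in
modern GHC Haskell. The name `synch` and the MVar handshake are my own
illustration, assuming unsafePerformIO for demand detection; Hughes
specified SYNCH abstractly, not this way:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Exception (evaluate)
import System.IO.Unsafe (unsafePerformIO)

-- Sketch of SYNCH: both components refer to x, but evaluating either
-- one blocks until the other is also under evaluation.
synch :: a -> (a, a)
synch x = unsafePerformIO $ do
    m1 <- newEmptyMVar
    m2 <- newEmptyMVar
    let y = unsafePerformIO $ do
                putMVar m1 ()   -- announce: y is demanded
                takeMVar m2     -- wait until z is demanded too
                return x
        z = unsafePerformIO $ do
                putMVar m2 ()   -- announce: z is demanded
                takeMVar m1     -- wait until y is demanded too
                return x
    return (y, z)
{-# NOINLINE synch #-}

-- Demanding both components from two threads succeeds; demanding
-- only one of them would stall, exactly as described above.
main :: IO ()
main = do
    let (y, z) = synch (6 * 7 :: Int)
    done1 <- newEmptyMVar
    done2 <- newEmptyMVar
    _ <- forkIO (evaluate y >>= putMVar done1)
    _ <- forkIO (evaluate z >>= putMVar done2)
    a <- takeMVar done1
    b <- takeMVar done2
    print (a, b)
```

Dropping either forkIO line makes the program block on the remaining
takeMVar, which is the stall you get when only one variable is demanded.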
The SYNCHLIST function is a variation on that; it also guarantees that
the list elements are consumed in lock-step.
SYNCHLIST xs = (d1 `seq` x1, d2 `seq` x2)
    where
    (d1,d2) = SYNCH xs
    (t1,t2) = SYNCHLIST (tail xs)
    (x1,x2) = case xs of
        []    -> ([]  , []  )
        (x:_) -> (x:t1, x:t2)
In other words, SYNCH binds a value to two different variables without
actually sharing it.
Regards,
Heinrich Apfelmus
--
http://apfelmus.nfshost.com
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe