On 10/08/2011 12:46 PM, Jan-Willem Maessen wrote:
On Fri, Oct 7, 2011 at 2:46 PM, Brandon Moore <brandon_m_mo...@yahoo.com> wrote:
Magnus Carlsson did something monadic several years ago.

http://dl.acm.org/citation.cfm?doid=581478.581482

Perhaps there is an implementation on Hackage or on his website.

This stuff also goes by the moniker "adaptive computation". See the
references and citations of that paper for more on this.
Umut Acar now seems to refer to this as "self-adjusting computation",
and has some work here:

http://umut.mpi-sws.org/self-adjusting-computation

In particular, there seems to be a modified version of MLton.
To tie things together a bit, Magnus Carlsson's paper was based on
Umut Acar's earlier work.  Note in particular that there's a lot of
emphasis placed on efficiently figuring out what computation needs to
be re-done (and some theory to support those efficiency claims).  FRP
frameworks and the like naively re-do rather too much computation (all
of it, in particularly bad cases) compared to systems specifically
tailored to self-adjustment.

-Jan-Willem Maessen

1. Thank you!  This is the kind of thing I was looking for.  It (a) uses compiler/interpreter infrastructure (not explicit programmer directives) to (b) construct a dynamic dependency graph that reflects the structure of the computation. I am curious whether anyone has built a dependency-tracking variant of the STG machine (which doesn't modify running code, I guess?) or, alternatively, of a graph-reduction machine like Sestoft's Mark 1 machine (which actually modifies running code). That would give call-by-need rather than call-by-value.
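To make the dependency-graph idea concrete, here is a minimal sketch of the modifiable-reference interface that Carlsson's monadic paper and Acar's ML work revolve around. The names (Mod, newMod, readMod, change) and the naive re-run strategy are purely illustrative; this is not the API of any package on Hackage, and a real implementation must also prune stale reads, which is exactly where Acar's order-maintenance machinery comes in:

import Data.IORef

-- A modifiable holds a value plus the consumers that have read it.
data Mod a = Mod (IORef a) (IORef [IO ()])

newMod :: a -> IO (Mod a)
newMod x = do
  v  <- newIORef x
  rs <- newIORef []
  return (Mod v rs)

-- Reading registers the consumer so it can be re-run on change.
readMod :: Mod a -> (a -> IO ()) -> IO ()
readMod (Mod v rs) k = do
  let consumer = readIORef v >>= k
  modifyIORef rs (consumer :)
  consumer

-- Changing a value naively re-runs every registered consumer.
change :: Mod a -> a -> IO ()
change (Mod v rs) x = do
  writeIORef v x
  readIORef rs >>= sequence_

-- Example: 'total' depends on 'a' and 'b'; changing 'a' re-runs only
-- the consumers downstream of 'a'.
example :: IO ()
example = do
  a     <- newMod (1 :: Int)
  b     <- newMod 2
  total <- newMod 0
  readMod a $ \x -> readMod b $ \y -> change total (x + y)
  readMod total $ \t -> putStrLn ("total = " ++ show t)  -- prints "total = 3"
  change a 10                                            -- prints "total = 12"

The interesting part, of course, is re-running consumers in the right order and discarding registrations that a re-run has invalidated; that bookkeeping is what the papers above spend most of their effort on.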

(Just for clarification: I am interested in calculations where the individual operations (e.g. the analogue of '+') are extremely slow and are currently written in C++. A certain amount of overhead therefore seems tolerable.)

2. Another question: most of the "self-adjusting computation" frameworks seem to assume that we always throw away the old computation. For function optimization (or, alternatively, Markov chain Monte Carlo random walks), we keep either the old or the new computation depending on the result of the new one. So we would not want to simply overwrite the old computation when we perform the new one. That could mean splitting (part of) the dynamic dependency graph, which might incur a lot of memory-allocation overhead...
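At the level of cached results, one way to avoid overwriting is to keep the cache in a persistent (purely functional) structure: the old and new versions then share structure, and a rejected proposal is just a pointer to the old version. A hedged sketch of that pattern follows; everything in it is hypothetical (there is no dependency tracking here, and expensiveOp merely stands in for the slow C++ call):

import qualified Data.Map as M

-- Hypothetical cache: the result of the expensive operation at index i.
type Cache = M.Map Int Double

expensiveOp :: Double -> Double
expensiveOp = negate . log    -- stand-in for the slow foreign call

-- Updating one entry allocates only O(log n) new spine nodes; the old
-- cache remains valid, so keeping both versions is cheap.
proposeAt :: Int -> Double -> Cache -> Cache
proposeAt i x = M.insert i (expensiveOp x)

-- One Metropolis step: keep the new cache on acceptance, otherwise
-- keep the old one.  Nothing is copied and nothing is overwritten.
mcmcStep :: (Cache -> Double) -> Cache -> (Int, Double, Double) -> Cache
mcmcStep logPost old (i, x, u)
  | u < exp (logPost new - logPost old) = new
  | otherwise                           = old
  where new = proposeAt i x old

Whether the dependency-graph bookkeeping itself can be made persistent in the same way, rather than updated destructively as in the systems above, is exactly the open part of the question.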

-BenRI
