Luke Palmer wrote:
The example being discussed in this thread is a good one:

  sum [1..10^8] + length [1..10^8]

With Haskell's semantics, we know we can write this as:

  let xs = [1..10^8] in sum xs + length xs

But it causes a change in memory *complexity*, so in some sense these
two sentences are not equal.  What is the theory in which we can
observe this fact?  Is it possible to give a simple denotational
semantics which captures it?

There's actually been a good deal of work on trying to mathematize this sort of issue, under the name of linear logic[1]. The problem with classical equational reasoning is that it doesn't capture the notion of "resources" or the "sharing" thereof. The two expressions are not equal because the first has two [1..10^8] resources it can use up, whereas the latter has only one. Depending on the balance between sharing computational work and minimizing memory residency, sometimes it's okay to add sharing and other times it's okay to remove it, but in general both options are not available at once.
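
To make the trade-off concrete, here's a small sketch (the function names and the one-pass fold are mine, not something from the thread). Both versions compute the same pair of answers; they differ only in how they spend their resources:

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Two independent list resources: each [1..n] is consumed lazily and
-- garbage-collected as it goes, so residency stays constant -- but the
-- work of generating the list is done twice.
twoPasses :: Integer -> (Integer, Int)
twoPasses n = (sum [1..n], length [1..n])

-- One resource, one traversal: a single strict fold computes both
-- results without retaining the list. Naively sharing `xs` between
-- `sum xs` and `length xs` would instead pin the whole list in memory,
-- since the second consumer still needs it after the first finishes.
onePass :: Integer -> (Integer, Int)
onePass n = foldl' step (0, 0) [1..n]
  where step (!s, !l) x = (s + x, l + 1)
```

Denotationally the two are identical; the difference is purely in resource consumption, which is exactly what classical equational reasoning fails to track.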

It's pretty easy to capture these issues of resource economy with LL, though making a rewriting system out of it takes a bit more framework. I'm not sure to what extent LL has been explored as a semantic model for programs, but Clean[2] (with its uniqueness types) and Phil Wadler[3] have certainly done closely related work.


[1] <http://en.wikipedia.org/wiki/Linear_logic>
[2] <http://www.st.cs.ru.nl/Onderzoek/Publicaties/publicaties.html>
[3] <http://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.html>

--
Live well,
~wren
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe