Re: [Haskell-cafe] no-coding functional data structures via lazyness
Dave Bayer wrote:
> The code is very fast for its size; I haven't seen Haskell code posted on the web that comes close, and it is faster than any of my other tries (I posted this code to http://www.haskell.org/haskellwiki/Prime_numbers). Effectively, it steals a heap data structure out of thin air by exploiting the implementation of lazy evaluation. It would seem that GHC's core data structures are coded closer to the machine than anything I can write _in_ Haskell. So much for studying how to explicitly write a good heap!

Indeed, it was irritating that I could not make an explicit efficiently-catenable-list data structure that was nearly as fast as the dlist technique ([a] -> [a]), or even the similarly performing (forall b. (a -> b -> b) -> b -> b), which does not really take advantage of the heavily optimized [] type. Though I did not try too hard, since the implicit version works just fine.

Isaac

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
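For the record, the two representations Isaac compares can be sketched side by side. This is a minimal illustration, not the dlist package itself; the names DList, CList, and the functions below are invented for the example.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Difference list: a list represented as a function that prepends it.
newtype DList a = DList ([a] -> [a])

fromListD :: [a] -> DList a
fromListD xs = DList (xs ++)

toListD :: DList a -> [a]
toListD (DList f) = f []

-- Append is mere function composition: O(1), independent of lengths.
appendD :: DList a -> DList a -> DList a
appendD (DList f) (DList g) = DList (f . g)

-- Church-encoded list: a list represented by its own right fold.
newtype CList a = CList (forall b. (a -> b -> b) -> b -> b)

fromListC :: [a] -> CList a
fromListC xs = CList (\c n -> foldr c n xs)

toListC :: CList a -> [a]
toListC (CList f) = f (:) []

-- Append again costs O(1): the second fold supplies the first's nil.
appendC :: CList a -> CList a -> CList a
appendC (CList f) (CList g) = CList (\c n -> f c (g c n))
```

Both give O(1) append, but only the first reuses the built-in [] type that GHC optimizes heavily, which is Isaac's point.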
Re: [Haskell-cafe] no-coding functional data structures via lazyness
On Tuesday 10 July 2007, Dave Bayer wrote:
> On Jul 9, 2007, at 6:52 PM, Donald Bruce Stewart wrote:
> > bayer: Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting the implementation of lazy evaluation to avoid explicitly writing an efficient concatenable list data structure.
> >
> > See also http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3
>
> Thanks; I added a link to the dlist package from my discussion of this idiom on the Wiki page http://www.haskell.org/haskellwiki/Prime_numbers
>
> On Jul 9, 2007, at 3:19 PM, Jonathan Cast wrote:
> > I think we usually call it `exploiting laziness'. . .
>
> My motivation in asking for a name was to be able to find other Haskell one-liners adequately replacing chapters of data structure books for problems of modest scale, e.g. finding the 5,000,000th prime. So far, I know concatenable lists, and heaps. Is there a Wiki page where someone teaches this principle for a dozen other classic data structures?
>
> Your one-liner made me laugh, but it didn't help me in googling. I would have preferred a one-liner teaching me another classic data structure, or an explanation of why burrowing into the GHC implementation gives such a speed advantage over a carefully written explicit data structure.
>
> People in other camps don't really get lazy evaluation, even many of our ML neighbors. It would pay to communicate this better to the outside world.

Unfortunately, I'm afraid all I can do at this point is wish you luck in your search.

Jonathan Cast
http://sourceforge.net/projects/fid-core
http://sourceforge.net/projects/fid-emacs
Re: [Haskell-cafe] no-coding functional data structures via lazyness
Jonathan Cast wrote:
> On Tuesday 10 July 2007, Dave Bayer wrote:
> > On Jul 9, 2007, at 6:52 PM, Donald Bruce Stewart wrote:
> > > bayer: Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting the implementation of lazy evaluation to avoid explicitly writing an efficient concatenable list data structure.
> > >
> > > See also http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3
> >
> > Thanks; I added a link to the dlist package from my discussion of this idiom on the Wiki page http://www.haskell.org/haskellwiki/Prime_numbers
> >
> > On Jul 9, 2007, at 3:19 PM, Jonathan Cast wrote:
> > > I think we usually call it `exploiting laziness'. . .
> >
> > My motivation in asking for a name was to be able to find other Haskell one-liners adequately replacing chapters of data structure books for problems of modest scale, e.g. finding the 5,000,000th prime. So far, I know concatenable lists, and heaps. Is there a Wiki page where someone teaches this principle for a dozen other classic data structures?
> >
> > Your one-liner made me laugh, but it didn't help me in googling. I would have preferred a one-liner teaching me another classic data structure, or an explanation of why burrowing into the GHC implementation gives such a speed advantage over a carefully written explicit data structure.
> >
> > People in other camps don't really get lazy evaluation, even many of our ML neighbors. It would pay to communicate this better to the outside world.
>
> Unfortunately, I'm afraid all I can do at this point is wish you luck in your search.

Maybe it is worth pointing out that the concatenable lists trick can be extended to various other operations on lists. For example, if one just changes a few definitions in the DList package as follows:

  newtype DList a = DL { unDL :: forall b. (a -> b) -> [b] -> [b] }

  fromList     = \xs -> DL ((++) . (flip List.map xs))
  toList       = ($ []) . ($ id) . unDL
  empty        = DL (const id)
  singleton    = \x -> DL ((++) . (:[]) . ($ x))
  cons x xs    = DL (\f -> (f x :) . unDL xs f)
  snoc xs x    = DL (\f -> unDL xs f . (f x :))
  append xs ys = DL (\f -> unDL xs f . unDL ys f)
  map f xs     = DL (unDL xs . (. f))

one gets concatenable, mappable lists in the sense that for those lists now also map can be done in O(1). (Of course, the actual cost of computing the mapped function on each eventually demanded list element is not saved, but there is no O(spine) cost anymore for distributing the function to each position in the list. Rather, this becomes O(1), just as the cost of append goes down from O(spine of the first list) to O(1). If there are repeated maps, such as in a naive definition of inits, the improvement can be considerable.)

Similar tricks can be played with reverse, filter, (...?). Just how can be seen from:

http://wwwtcs.inf.tu-dresden.de/~voigt/p114-voigtlaender.pdf
http://wwwtcs.inf.tu-dresden.de/~voigt/icfp2002-slides.pdf
http://wwwtcs.inf.tu-dresden.de/~voigt/Vanish.lhs

Ciao, Janis.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]
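A minimal self-contained version of the generalised DList, sketched from the definitions in Janis's message (only the subset needed to demonstrate O(1) append and map; mapD is used here to avoid clashing with Prelude.map):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The list is abstracted not only over its tail (as in ordinary
-- difference lists) but also over a function applied to every element.
newtype DList a = DL { unDL :: forall b. (a -> b) -> [b] -> [b] }

fromList :: [a] -> DList a
fromList xs = DL (\f -> (map f xs ++))

toList :: DList a -> [a]
toList xs = unDL xs id []

-- O(1): just function composition, as with ordinary difference lists.
append :: DList a -> DList a -> DList a
append xs ys = DL (\f -> unDL xs f . unDL ys f)

-- Also O(1): mapping merely composes onto the pending element function,
-- instead of walking the spine.
mapD :: (a -> b) -> DList a -> DList b
mapD f xs = DL (unDL xs . (. f))
```

The per-element work of the mapped function is still paid when elements are demanded; what disappears is the O(spine) traversal per map.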
[Haskell-cafe] no-coding functional data structures via lazyness
Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting the implementation of lazy evaluation to avoid explicitly writing an efficient concatenable list data structure. This felt like cheating, or at least like using a screwdriver as a crowbar, to be less judgmental.

Recently I was playing with prime sieves and various heap data structures, while rereading Chris Okasaki's Purely Functional Data Structures, and it dawned on me:

  merge xs@(x:xt) ys@(y:yt) = case compare x y of
      LT -> x : merge xt ys
      EQ -> x : merge xt yt
      GT -> y : merge xs yt

  diff xs@(x:xt) ys@(y:yt) = case compare x y of
      LT -> x : diff xt ys
      EQ -> diff xt yt
      GT -> diff xs yt

  merge' (x:xt) ys = x : merge xt ys

  primes = ps ++ (diff ns $ foldr1 merge' $ map f $ tail primes)
    where
      ps = [2,3,5]
      ns = [7,9..]
      f p = [ m*p | m <- [p, p+2 ..] ]

The code is very fast for its size; I haven't seen Haskell code posted on the web that comes close, and it is faster than any of my other tries (I posted this code to http://www.haskell.org/haskellwiki/Prime_numbers). Effectively, it steals a heap data structure out of thin air by exploiting the implementation of lazy evaluation. It would seem that GHC's core data structures are coded closer to the machine than anything I can write _in_ Haskell. So much for studying how to explicitly write a good heap!

So is there a name for this idiom, no-coding a classic data structure through lazy evaluation? Are there other good examples?
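For readers who want to try it, here is the sieve as a self-contained module with type signatures added. The partial patterns are harmless here because every list involved is infinite; the key step is that merge' emits the head of its first argument before forcing the merge, which lets foldr1 build the infinite, heap-like tower of merges lazily.

```haskell
-- Dave Bayer's sieve, with type signatures added for clarity.
merge :: Ord a => [a] -> [a] -> [a]
merge xs@(x:xt) ys@(y:yt) = case compare x y of
    LT -> x : merge xt ys
    EQ -> x : merge xt yt
    GT -> y : merge xs yt

-- Ordered set difference on sorted infinite lists.
diff :: Ord a => [a] -> [a] -> [a]
diff xs@(x:xt) ys@(y:yt) = case compare x y of
    LT -> x : diff xt ys
    EQ -> diff xt yt
    GT -> diff xs yt

-- Lazy in ys: produces output before touching the second list, which
-- makes the foldr1 over an infinite list of lists productive.
merge' :: Ord a => [a] -> [a] -> [a]
merge' (x:xt) ys = x : merge xt ys

primes :: [Integer]
primes = ps ++ diff ns (foldr1 merge' (map f (tail primes)))
  where
    ps = [2,3,5]
    ns = [7,9..]
    f p = [ m*p | m <- [p, p+2 ..] ]  -- odd multiples of p, from p*p
```

Usage: `take 10 primes` yields [2,3,5,7,11,13,17,19,23,29].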
Re: [Haskell-cafe] no-coding functional data structures via lazyness
On Monday 09 July 2007, Dave Bayer wrote:
> Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting the implementation of lazy evaluation to avoid explicitly writing an efficient concatenable list data structure. This felt like cheating, or at least like using a screwdriver as a crowbar, to be less judgmental.
>
> Recently I was playing with prime sieves and various heap data structures, while rereading Chris Okasaki's Purely Functional Data Structures, and it dawned on me:
>
>   merge xs@(x:xt) ys@(y:yt) = case compare x y of
>       LT -> x : merge xt ys
>       EQ -> x : merge xt yt
>       GT -> y : merge xs yt
>
>   diff xs@(x:xt) ys@(y:yt) = case compare x y of
>       LT -> x : diff xt ys
>       EQ -> diff xt yt
>       GT -> diff xs yt
>
>   merge' (x:xt) ys = x : merge xt ys
>
>   primes = ps ++ (diff ns $ foldr1 merge' $ map f $ tail primes)
>     where
>       ps = [2,3,5]
>       ns = [7,9..]
>       f p = [ m*p | m <- [p, p+2 ..] ]
>
> The code is very fast for its size; I haven't seen Haskell code posted on the web that comes close, and it is faster than any of my other tries (I posted this code to http://www.haskell.org/haskellwiki/Prime_numbers). Effectively, it steals a heap data structure out of thin air by exploiting the implementation of lazy evaluation. It would seem that GHC's core data structures are coded closer to the machine than anything I can write _in_ Haskell. So much for studying how to explicitly write a good heap!
>
> So is there a name for this idiom, no-coding a classic data structure through lazy evaluation? Are there other good examples?

I think we usually call it `exploiting laziness'. . .

Jonathan Cast
http://sourceforge.net/projects/fid-core
http://sourceforge.net/projects/fid-emacs
Re: [Haskell-cafe] no-coding functional data structures via lazyness
bayer:
> Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting the implementation of lazy evaluation to avoid explicitly writing an efficient concatenable list data structure. This felt like cheating, or at least like using a screwdriver as a crowbar, to be less judgmental.

See also http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3

-- Don
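The ShowS idiom behind both the Prelude and the dlist package is the difference-list trick specialised to String: the Prelude defines `type ShowS = String -> String`, and `shows`/`showChar` produce such functions so that rendering composes in O(1) per step. A small sketch (Tree, renderTree, and leftSpine are illustrative names, not part of any library):

```haskell
-- Rendering with (++) directly is quadratic on left-nested appends,
-- because each append re-traverses its left argument. Composing
-- String -> String functions (the Prelude's ShowS) keeps it linear.
data Tree = Leaf | Node Tree Int Tree

renderTree :: Tree -> ShowS
renderTree Leaf         = id
renderTree (Node l x r) =
    renderTree l . shows x . showChar ' ' . renderTree r

-- A left-leaning tree: the worst case for naive (++) rendering.
leftSpine :: Int -> Tree
leftSpine 0 = Leaf
leftSpine n = Node (leftSpine (n - 1)) n Leaf
```

Usage: `renderTree (leftSpine 3) ""` produces the string "1 2 3 ".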
Re: [Haskell-cafe] no-coding functional data structures via lazyness
On Jul 9, 2007, at 6:52 PM, Donald Bruce Stewart wrote:
> bayer: Learning Haskell, the Prelude.ShowS type stood out as odd, exploiting the implementation of lazy evaluation to avoid explicitly writing an efficient concatenable list data structure.
>
> See also http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist-0.3

Thanks; I added a link to the dlist package from my discussion of this idiom on the Wiki page http://www.haskell.org/haskellwiki/Prime_numbers

On Jul 9, 2007, at 3:19 PM, Jonathan Cast wrote:
> I think we usually call it `exploiting laziness'. . .

My motivation in asking for a name was to be able to find other Haskell one-liners adequately replacing chapters of data structure books for problems of modest scale, e.g. finding the 5,000,000th prime. So far, I know concatenable lists, and heaps. Is there a Wiki page where someone teaches this principle for a dozen other classic data structures?

Your one-liner made me laugh, but it didn't help me in googling. I would have preferred a one-liner teaching me another classic data structure, or an explanation of why burrowing into the GHC implementation gives such a speed advantage over a carefully written explicit data structure.

People in other camps don't really get lazy evaluation, even many of our ML neighbors. It would pay to communicate this better to the outside world.