On Fri, 22 Jan 1999, David Barton wrote:
> Peter Møller Neergaard writes:
>
> 1) The implementation of list concatenation ++. The Haskell
> report states that ++ is, in general, an operator on monads.
> In the case of lists, ++ works as list concatenation. However,
> I cannot find any place stating whether the implementation of
> ++ is supposed to be better than the straightforward:
>
> [] ++ ys = ys
> (x:xs) ++ ys = x : (xs ++ ys)
>
>
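(On the ++-as-monad-operator point: that is the Haskell 1.4 class
hierarchy. From memory, so treat the details as approximate, the 1.4
Prelude declares roughly

    class Monad m => MonadZero m where
        zero :: m a

    class MonadZero m => MonadPlus m where
        (++) :: m a -> m a -> m a

with the list instance of ++ being ordinary concatenation. As I
understand it, the Haskell 98 draft makes ++ a plain list function
again and calls the class method mplus, so the monadic reading is on
its way out anyway.)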
> See Okasaki, "Purely Functional Data Structures", for a discussion of
> catenable lists and queues. But keep in mind that you may not need
> them: while the running time of this definition is O(n) in the length
> of the first argument, in most contexts you will be accessing the
> resulting list from the head anyway. This means that the cost of the
> concatenation can be amortized over the list accesses, giving
> amortized O(1) running time per element. Even if you don't access the
> entire list, lazy evaluation means the unaccessed part of the list
> won't be evaluated. Again, Okasaki gives a good summary of amortized
> cost in the presence of lazy evaluation, and of methods for proving
> amortized cost bounds.
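David's laziness point is easy to see experimentally. Below is a
minimal sketch: appendT is a name I've made up for the straightforward
definition above with a trace call added (modern GHC has trace in
Debug.Trace; older compilers keep it elsewhere), so you can watch how
much of the concatenation actually runs.

    import Debug.Trace (trace)

    -- The Prelude definition of ++, instrumented: one "++ step" is
    -- printed for each cons cell of the result that gets forced.
    appendT :: [a] -> [a] -> [a]
    appendT []     ys = ys
    appendT (x:xs) ys = trace "++ step" (x : appendT xs ys)

    -- Forcing only a prefix of the result forces only that much of
    -- the concatenation: this prints "++ step" three times, not a
    -- million.
    main :: IO ()
    main = print (take 3 (appendT [1 .. 1000000 :: Int] [0]))

The cost of ++ is paid one step at a time, as the result is consumed,
which is exactly the amortization argument.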
With regard to Peter's original motivation for the question though, from
what I remember of the complexity argument in Bird, de Moor and Jones,
you don't need anything more efficient than the above definition in
order to get the required linear complexity (together with the
additional criterion that the algorithm is `on-line'). Pursuing more
efficient implementations too far might well be a red herring. (I might
be wrong about this, of course.)
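To give a feel for why the naive definition is often enough (this is
the usual folklore sketch, not the argument from the paper, and
flattenR/flattenL are names I've invented): the cost of ++ depends on
how the applications nest, and the right-nested chains you get from
building output front-first are already linear.

    -- Right-nested: xs1 ++ (xs2 ++ (... ++ [])). Each cons cell is
    -- copied at most once, so the total cost is linear in the output.
    flattenR :: [[a]] -> [a]
    flattenR = foldr (++) []

    -- Left-nested: ((([] ++ xs1) ++ xs2) ++ ...). The growing prefix
    -- is re-copied at every step, so the total cost is quadratic.
    flattenL :: [[a]] -> [a]
    flattenL = foldl (++) []

So as long as a program produces and consumes its lists front-first,
the `on-line' situation, the straightforward definition already meets
the linear bound.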
___cheers,_dave_________________________________________________________
email: [EMAIL PROTECTED] "I'm guilty of a lot of things but I've
www.cs.bris.ac.uk/~tweed/pi.htm never violated the law of conservation
work tel: (0117) 954-5253 of energy!" -- Bart Kosko, Nanotime