Florian Hars <[EMAIL PROTECTED]> wrote,

> To cite the comments of Olof Torgersson on Haskell
> (www.cs.chalmers.se/pub/users/oloft/Papers/wm96/wm96.html):
> 
>    As a consequence the programmer loses control of what is really
>    going on and may need special tools like heap-profilers to find out
>    why a program consumes memory in an unexpected way. To fix the
>    behavior of programs the programmer may be forced to rewrite the
>    declarative description of the problem in some way better suited to
>    the particular underlying implementation. Thus, an important
>    feature of declarative programming may be lost -- the programmer
>    does not only have to be concerned with how a program is executed
>    but has to understand a model that is difficult to understand and
>    very different from the intuitive understanding of the program.

What's the problem with using a heap profiler?  Sure, heap
usage can get out of hand when writing a straightforward,
maximally declarative (whatever that means) version of a
program.  Is this a problem?  No.
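
(A minimal sketch of what I mean -- the `mean' function below is
purely illustrative, not taken from any real program: written in
the most declarative way, the lazy left fold accumulates one
unevaluated thunk per list element, which is exactly the kind of
heap growth a profiler points you to.)

    -- Declarative, but leaky: the accumulator pair is only forced
    -- at the very end, so foldl builds a long chain of thunks.
    mean :: [Double] -> Double
    mean xs = s / fromIntegral n
      where
        (s, n)            = foldl step (0, 0 :: Int) xs
        step (acc, len) x = (acc + x, len + 1)

    main :: IO ()
    main = print (mean [1 .. 1000000])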

The trick about a language like Haskell is that you get to a
working prototype of your program extremely quickly.  Then,
you can improve performance and, e.g., use a heap profiler to
find the weak spots and rewrite them.  And yes, at that point
I usually lose the pure high-level view of my program in
those places in the code.
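
(Continuing the illustration: the rewritten weak spot might look
like the strict version below -- again just a sketch -- where the
explicit `seq's force the accumulators at every step, so the fold
runs in constant heap at the cost of operational detail creeping
into the definition.)

    -- The same mean after profiling: strict accumulators, constant heap.
    mean' :: [Double] -> Double
    mean' xs = go 0 0 xs
      where
        go :: Double -> Int -> [Double] -> Double
        go s n []       = s / fromIntegral n
        go s n (y : ys) = let s' = s + y
                              n' = n + 1
                          in  s' `seq` n' `seq` go s' n' ys

    main :: IO ()
    main = print (mean' [1 .. 1000000])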

This is still a *lot* better than not having the high-level
view at all and taking much longer to get the first working
version of my program (even if it is already more efficient
at that point).  I don't know about you, but I'd rather have
90% of my program in a nice declarative style and rewrite
10% later to make it efficient, than have a mess
throughout 100% of the program.

Cheers,
Manuel
