On Fri, 16 Oct 1998, Dave Tweed wrote:
> On Fri, 16 Oct 1998, Simon Peyton-Jones wrote:
>
> > Another approach is to compete not head-to-head on speed, but on
> > cunning. Get a good library of numeric procedures (e.g. Matlab),
> > interface them to Haskell, and use Haskell as the glue code to make
> > it really fast to write complex numerical algorithms. 99% of the
> > time will still be spent in the library, so the speed of the Haskell
> > implementation is not very important. This looks like a jolly productive
> > line to me.
It is, but it is not so simple either. I remember some
fine points we had to consider before committing ourselves
to a particular approach when interfacing Eiffel to the NAG
library (Numerical Algorithms Group). I think it is worth
citing some of those points here to underline the complexity
of the gluing process for anything larger than a simple
one-two-three program. After all, there is a world of
difference between a program and a library - with many
potential users: cursing or loving you later. :-)
1. Choice of a general interfacing mechanism, which corresponds
to making a similar choice in Haskell: Green Card? Haskell-direct?
We used Cecil. (A minimal binding sketch follows the list.)
2. Error handling. NAG used a global error structure, much as
the X Window System does. Back then we did not worry much
about multithreading, but there were still some inconsistencies
between the object-oriented and the classical approaches.
(See the error-checking sketch after the list.)
3. Accuracy issues. Machine dependencies and such.
Double? Double-double? Interval arithmetic? (A tiny interval
sketch follows the list.)
4. Breaking functions with very many arguments into smaller
and more digestible pieces, with some default setups. In an
object-oriented approach this is quite simple, because you can
use local object variables for this. Not so for functional
programming, especially Haskell - unless you are prepared to
sacrifice user-friendliness. (A record-of-defaults sketch
follows the list.)
5. Side effects. NAG, like many other C-based libraries,
mixes the concept of a function with that of a procedure.
In Eiffel one can also do this, but it is considered inelegant
and unsafe, so one strives for consistency, breaking such
a mixed beast into two steps: a procedure, followed by a
function. (A sketch of this split appears after the list.)
6. The most painful stuff was related to pointers to
functions-to-functions-to-functions. But here Haskell would
shine, of course! (See the callback sketch after the list.)
7. Preconditions, postconditions, invariants. Here Eiffel really
shines, and this part was the most rewarding for all of us,
and for the users as well, I hope. (The last sketch after the
list shows a crude Haskell approximation.)
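To make point 1 concrete, here is a minimal sketch of a direct
binding, assuming a C-level foreign-import mechanism of the kind
Green Card or Haskell-direct generates. The entry point
nag_bessel_j0 is a made-up stand-in, not a real NAG name.

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Bessel where

    import Foreign.C.Types (CDouble (..))

    -- Hypothetical one-shot entry point; substitute the real name.
    foreign import ccall unsafe "nag_bessel_j0"
      c_besselJ0 :: CDouble -> CDouble

    -- A thin Haskell face over the raw call.
    besselJ0 :: Double -> Double
    besselJ0 = realToFrac . c_besselJ0 . realToFrac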
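For point 2, one workable discipline is to read the global error
word immediately after every call and turn a nonzero value into
an IO failure. Both entry points here are hypothetical stand-ins
for whatever error-query interface the real library exposes.

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Errors where

    import Foreign.C.Types (CDouble (..), CInt (..))

    foreign import ccall unsafe "nag_solve"
      c_solve :: CDouble -> IO CDouble
    foreign import ccall unsafe "nag_last_error"
      c_lastError :: IO CInt

    -- Check the global error word right after the call, before any
    -- other call can clobber it (single-threaded use assumed).
    checked :: IO CDouble -> IO Double
    checked act = do
      r   <- act
      err <- c_lastError
      if err == 0
        then return (realToFrac r)
        else ioError (userError ("library error code " ++ show err))

    solve :: Double -> IO Double
    solve x = checked (c_solve (realToFrac x))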
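For point 3, the skeleton of interval arithmetic is pleasantly
small in Haskell. A serious version would also have to control
the rounding mode, which this toy ignores.

    module Interval where

    data Interval = Interval { lo :: Double, hi :: Double }
      deriving Show

    instance Num Interval where
      Interval a b + Interval c d = Interval (a + c) (b + d)
      Interval a b - Interval c d = Interval (a - d) (b - c)
      Interval a b * Interval c d =
        let ps = [a * c, a * d, b * c, b * d]
        in Interval (minimum ps) (maximum ps)
      negate (Interval a b) = Interval (negate b) (negate a)
      abs iv@(Interval a b)
        | a >= 0    = iv
        | b <= 0    = negate iv
        | otherwise = Interval 0 (max (negate a) b)
      signum (Interval a b) = Interval (signum a) (signum b)
      fromInteger n = let x = fromInteger n in Interval x x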
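For point 4, the closest Haskell analogue to the object-local
defaults we used in Eiffel is a record of settings with a default
value that the caller perturbs; all the names here are made up,
and the integrator is just a toy midpoint rule.

    module Defaults where

    data SolverOpts = SolverOpts
      { tolerance :: Double
      , maxSteps  :: Int
      }

    defaultOpts :: SolverOpts
    defaultOpts = SolverOpts { tolerance = 1.0e-9, maxSteps = 500 }

    -- The routine takes the whole bundle of settings at once.
    integrate :: SolverOpts -> (Double -> Double) -> Double -> Double -> Double
    integrate opts f a b =
      let n = maxSteps opts
          h = (b - a) / fromIntegral n
      in h * sum [ f (a + (fromIntegral i + 0.5) * h) | i <- [0 .. n - 1] ]

    -- The caller overrides only what matters, via record update.
    example :: Double
    example = integrate defaultOpts { maxSteps = 10000 } (\x -> x * x) 0 1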
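For point 5, the command/query split we imposed by convention in
Eiffel falls out of Haskell's types: the state-changing step lives
in IO, and the observing step merely reads. Names are illustrative.

    module Split where

    import Data.IORef

    -- Stand-in for the library's mutable workspace.
    newtype Workspace = Workspace (IORef [Double])

    newWorkspace :: IO Workspace
    newWorkspace = fmap Workspace (newIORef [])

    -- The procedure: does the work, changes state, returns nothing.
    runStep :: Workspace -> Double -> IO ()
    runStep (Workspace ref) x = modifyIORef ref (x * x :)

    -- The function: observes the outcome without changing anything.
    latest :: Workspace -> IO (Maybe Double)
    latest (Workspace ref) = do
      rs <- readIORef ref
      return (case rs of { [] -> Nothing; (r : _) -> Just r })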
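For point 6, this is indeed where Haskell shines: a closure can
be turned into a C function pointer with a "wrapper" import. The
solver entry point nag_quad is, again, a made-up stand-in.

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Callback where

    import Foreign.C.Types (CDouble (..))
    import Foreign.Ptr (FunPtr, freeHaskellFunPtr)

    type CFun = CDouble -> CDouble

    -- Manufactures a C-callable pointer from a Haskell function.
    foreign import ccall "wrapper"
      mkFun :: CFun -> IO (FunPtr CFun)

    -- Hypothetical quadrature routine taking a function pointer.
    foreign import ccall "nag_quad"
      c_quad :: FunPtr CFun -> CDouble -> CDouble -> IO CDouble

    quad :: (Double -> Double) -> Double -> Double -> IO Double
    quad f a b = do
      fp <- mkFun (realToFrac . f . realToFrac)
      r  <- c_quad fp (realToFrac a) (realToFrac b)
      freeHaskellFunPtr fp      -- we own the pointer, so release it
      return (realToFrac r)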
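Finally, for point 7: Haskell has nothing as systematic as
Eiffel's contracts, but assertions at a wrapper's boundary give
a crude approximation. This is a sketch, not a substitute.

    module Contract where

    import Control.Exception (assert)

    -- Precondition: x >= 0.  Postcondition: r * r is close to x.
    checkedSqrt :: Double -> Double
    checkedSqrt x =
      assert (x >= 0) $                                    -- precondition
        let r = sqrt x
        in assert (abs (r * r - x) <= 1.0e-9 * (1 + x)) r  -- postcondition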
Most of those things came to be handled almost automatically
once we had gained some experience and set up some rules.
Yet there was a lot of initial sweating!
>
> I guess that this is one of those points where Jan mentioned there's a
> line between scientific & engineering computing. From (my admittedly very
> limited) experience, there's a lot of scientific computing which boils
> down to the variations on the same basic problem formulation, e.g., some
> species of system of differential equations, finding eigenvalues, etc.
> This area would be well served by being able to link to Matlab, LINPACK,
> etc, but the amount & complexity of the gluing to be done would be
> relatively small so that working with C or Fortran wouldn't really bite
> you. So I suspect that, if you're talking about `max gain for least pain'
> in Real World applications, it's low pain but also low gain.
I guess you are right here - subject to my earlier
qualification of what 'least pain' means. :-)
>
> But there's a lot of problems, probably more in the hazy region between
> science & engineering, where `numerically intensive' algorithms are
> developed which don't look anything like existing classical techniques.
> Here the issue is to generate CORRECT results REASONABLY QUICKLY, ie, the
> time has to be within a factor of 3-4 times of a C implementation but this
> slowdown is acceptable if you are more confident your infant algorithm is
> correctly implemented in your infant code.
I agree whole-heartedly here!
> As an example, I've got several
> variations on the standard idea of a Markov Random Field being used in my
> current work which require heavy modifications within the various solution
> schemes for MRF problems so that I can't interface to an existing MRF
> solver (although I can use ideas from looking at their source code). I've
> spent many hours searching for errors occasioned by the low-level nature
> of C; I'd have loved it if I could have saved myself time OVERALL by
> using a functional language.
>
> So in my opinion, although it's much more work, you'd get much more `Real
> World' usage by concentrating on rapid development scientific/engineering
> computing.
I wonder what others think about it. I would love to see
this line of approach accepted, refined and implemented, but
I do not yet feel qualified enough to advocate it myself.
Jan