John Meacham wrote:

> sorry if this is a silly question but....
 
> so I want to create a proper set of linear algebra tools to write a
> ray-tracer in Haskell. This includes a few types, a Point and a
> Vector, both of which can be represented by 3 Floats. I would like
> the following operations:
> Vector+Vector = Vector
> Point + Vector = Point
> Point + Point = [Invalid should be compile time error]
> Point - Point = Vector
> Vector - Vector = Vector
> Vector * Float = Vector
> Point * Float = [Error!]

...

> my question is... is there any way to do this using the +, -, *
> operators, or some other way, without creating a new operator for
> each valid combination?

...
> Also, on a more general question: how much slower can I expect, say,
> a raytracer to be in Haskell than in C? I am experimenting with new
> algorithms, so Haskell seems very attractive in that regard; however,
> speed is very important too, as this is the type of application that
> would run for several hours in optimized C. How much of a hit can I
> expect? Is it always a constant overhead, or are there cases where
> the Haskell runtime can cause worse overhead? Thanks.


+++++++++++++++++++++
This is FAR from being a silly question.
The overloading of "standard" arithmetic operations is relatively easy,
but its proper structuring is clumsy. Here is my solution (in Hugs).

1. I began by screening almost everything numeric from the Prelude:
   all the operators, class Num and its subclasses, etc. (There are
   some problems with that: the default mechanism breaks down, but
   this is not a mortal disease.)
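In Hugs (or GHC) this screening might look something like the
following minimal sketch; the module name Geometry is my invention,
not part of the original code:

    module Geometry where

    -- hide the Num and Fractional machinery, keep the rest
    import Prelude hiding (Num(..), Fractional(..))
    -- the old arithmetic stays reachable, qualified
    import qualified Prelude as P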

2. I introduced (with several variants) such classes as AddGroup, with
   addition and subtraction; Monoid, with multiplication; and Module,
   with an "external" multiplication (*>), for example a scalar by a
   vector. Please don't try to have the "normal" multiplication
   between scalars and vectors; that is in any case methodologically
   not very sane.
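One possible shape for these classes (the class names follow the text
above; the fixities and the default method are my guesses; the
two-parameter Module class needs the multi-parameter-classes
extension, and with a modern Prelude you would also have to hide its
(*>)):

    infixl 6 +, -
    infixl 7 *, *>

    class AddGroup a where
      (+), (-) :: a -> a -> a
      neg      :: a -> a
      x - y = x + neg y            -- default: subtract via negation

    class Monoid a where           -- multiplicative monoid; nothing
      (*) :: a -> a -> a           -- to do with today's Data.Monoid

    class Module s v where
      (*>) :: s -> v -> v          -- external multiplication,
                                   -- e.g. scalar times vector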

3. For some geometric manipulations I had Vectors and Points as well.
   These types, together with Ints, Floats, PolyVectors, Power Series,
   lazy differential towers, DifferentialForms, Quaternions and all
   this horrible stuff you can imagine if you wish, have been declared
   as instances of these new classes.
   Yes, the answer to the question "why not use the standard Num" is:
   "why not?" In principle, in many circumstances it is possible, but
   awkward. Philosophically it annoys me. Perhaps quaternions are
   Numbers, but Power Series?
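A taste of step 3, continuing the sketch above (P is the qualified
Prelude; the scalar instances simply delegate to the old arithmetic):

    instance AddGroup Float where
      x + y = x P.+ y
      x - y = x P.- y
      neg   = P.negate

    instance Monoid Float where
      x * y = x P.* y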

4. Points and Vectors belong to the same type, but with different tags.
   The illegality of P+P is discovered at run time, but do you really
   need the compile-time diagnosis here? Why? The P-P=V problem 
   raised by John Peterson disappears.
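A minimal sketch of step 4, with my own names (not the original code):
P+P dies at run time, and P-P=V falls out of a single equation:

    data Tag = Pnt | Vct  deriving (Eq, Show)
    data Geo = Geo Tag Float Float Float  deriving Show

    instance AddGroup Geo where
      (Geo Pnt _ _ _) + (Geo Pnt _ _ _) = error "Point + Point"
      (Geo s a b c)   + (Geo t x y z)
        | s == Vct && t == Vct = Geo Vct (a P.+ x) (b P.+ y) (c P.+ z)
        | otherwise            = Geo Pnt (a P.+ x) (b P.+ y) (c P.+ z)
      (Geo s a b c)   - (Geo t x y z)
        | s == t    = Geo Vct (a P.- x) (b P.- y) (c P.- z) -- P-P = V
        | s == Pnt  = Geo Pnt (a P.- x) (b P.- y) (c P.- z) -- P-V = P
        | otherwise = error "Vector - Point"
      neg (Geo Vct a b c) = Geo Vct (P.negate a) (P.negate b)
                                    (P.negate c)
      neg _               = error "negating a Point"

    instance Module Float Geo where
      s *> (Geo Vct a b c) = Geo Vct (s P.* a) (s P.* b) (s P.* c)
      _ *> _               = error "scaling a Point"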

(BTW, your "newtype Vector = Vector (Float,Float,Float)" etc. is in
contradiction with your statement concerning the usage of homogeneous
coordinates. Those are very elegant, but relatively seldom used in
RTracers; they are more useful if you often have to deal with
homothetic projections. The statement P*Fl [I would write Fl*>P]
= Error is not necessarily the best convention. This is the homothetic
scaling, which may be represented in homogeneous coordinates by the
*division* of the fourth coordinate by the scaling factor. You might
need it.)
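Concretely (a standalone fragment with ordinary Prelude arithmetic;
the 4-tuple representation is my choice for illustration):

    type HPoint = (Float, Float, Float, Float)
    -- (x, y, z, w) represents the 3D point (x/w, y/w, z/w)

    scaleH :: Float -> HPoint -> HPoint
    scaleH s (x, y, z, w) = (x, y, z, w / s)
    -- dividing w by s is the same as multiplying the point by s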

++

I am afraid that the performance of a ray tracer in current Haskell
might be horrible. First of all: the main loop of the tracer is the
scanning of your viewport rectangle and updating it. The array
processing mechanisms must be quite efficiently implemented.

The main computing task within a complex scene is the scanning of the
object database, and finding the intersections of the ray with the
surface (and then computing the normal, throwing the ray at the light
source, etc.). This database should be hierarchized with bounding
boxes, BSP partitions, etc. Real RTracers (see POVray, OORT, etc.) use
random-access data structures very intensively, and memoizing (once
the "eligible" intersection point has been found, the next pass begins
with the *same* object and its neighbours). The data structures are
usually static, and the intermediate composite variables are heavily
reused. With several hundred thousand intersections, the GC may have
to work hard in Haskell...
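To make the bounding-box point concrete, here is the usual "slab"
rejection test a tracer runs before descending into a box (a generic
sketch with my own types and ordinary Prelude arithmetic, not code
from POVray or OORT):

    data Ray  = Ray  (Float,Float,Float) (Float,Float,Float)
                -- origin and direction
    data BBox = BBox (Float,Float,Float) (Float,Float,Float)
                -- minimal and maximal corners

    hitsBox :: Ray -> BBox -> Bool
    hitsBox (Ray (ox,oy,oz) (dx,dy,dz)) (BBox (x0,y0,z0) (x1,y1,z1)) =
        tmax >= max tmin 0
      where
        -- parameter interval cut out by one pair of parallel planes;
        -- division by zero yields IEEE infinities, which the min/max
        -- comparisons below tolerate in the common cases
        slab o d lo hi = (min t1 t2, max t1 t2)
          where t1 = (lo - o) / d
                t2 = (hi - o) / d
        (tx1,tx2) = slab ox dx x0 x1
        (ty1,ty2) = slab oy dy y0 y1
        (tz1,tz2) = slab oz dz z0 z1
        tmin = maximum [tx1, ty1, tz1]
        tmax = minimum [tx2, ty2, tz2]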

In this particular context I don't see any advantage of laziness despite
my true and profound love for everything that is non-strict.

The idea is challenging and very nice. I have tried it, but everything
I got was a very slow toy system, and I am too old for toys. But Haskell
is progressing, and I think that such projects *SHOULD* be realized even
if not very practical, in order to analyze the behaviour of the Haskell
code in real situations.


Jerzy Karczmarczuk
Caen, France.

