I have a number of compute-bound graphics programs written in Haskell. (Fractal generators, ray tracers, that kind of thing.) GHC offers several concurrency and parallelism abstractions, but what's the best way to use these to get images rendered as fast as possible, using the available compute power?

(OK, well the *best* way is to use the GPU. But AFAIK that's still a theoretical research project, so we'll leave that for now.)

I've identified a few common cases. You have a 2D grid of points, and you want to compute the value at each point. Eventually you will have a grid of /pixels/ where each value is a /colour/, but there may be intermediate steps before that. So, what cases exist?

1. A point's value is a function of its coordinates.

2. A point's value is a function of its previous value from the last frame.

3. A point's value is a function of /several/ points from the last frame.
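
To make those concrete, here are rough sketches of each case (the types and formulas are just placeholders, not anything from my actual programs):

    -- Case 1: the value depends only on the coordinates
    -- (e.g. a Mandelbrot iteration count).
    valueAt :: Int -> Int -> Double
    valueAt x y = fromIntegral (x * x + y * y)      -- placeholder formula

    -- Case 2: the value depends on the previous frame's value at the same point.
    stepLocal :: (Int -> Int -> Double) -> Int -> Int -> Double
    stepLocal prev x y = 0.9 * prev x y             -- placeholder update rule

    -- Case 3: the value depends on several neighbouring points from the
    -- previous frame (a stencil: blur, diffusion, cellular automata, ...).
    stepStencil :: (Int -> Int -> Double) -> Int -> Int -> Double
    stepStencil prev x y =
      (prev (x-1) y + prev (x+1) y + prev x (y-1) + prev x (y+1)) / 4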

How can we accelerate this? I see a few options:

- Create a spark for every point in the grid.
- Create several explicit threads to populate non-overlapping regions of the grid.
- Use parallel arrays. (Does this actually work yet?)

I'm presuming that sparking every individual point is going to create millions of absolutely tiny sparks per frame, most of which will fizzle or overflow the spark pool, so that probably won't give great performance. Perhaps we could spark each row rather than each point?
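
For example, with Control.Parallel.Strategies I imagine something like this would give one spark per row (pointValue is just a stand-in for whatever the renderer actually computes):

    import Control.Parallel.Strategies (parList, rdeepseq, using)

    -- Stand-in for the real per-point computation.
    pointValue :: Int -> Int -> Double
    pointValue x y = fromIntegral (x * y)

    -- One spark per row: each inner list is forced in parallel, but the
    -- points within a row are evaluated sequentially by whichever core
    -- picks up that row's spark.
    renderRows :: Int -> Int -> [[Double]]
    renderRows width height =
      [ [ pointValue x y | x <- [0 .. width - 1] ]
      | y <- [0 .. height - 1] ]
        `using` parList rdeepseq

(Compiled with -threaded and run with +RTS -N, of course, otherwise none of this buys anything.)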

Using explicit threads has the nice side-effect that we can produce progress information. Few things are more frustrating than staring at a blank screen with no idea how long it's going to take. I'm also thinking that giving each thread its own contiguous region of the grid might stop two cores from writing to the same cache lines and tripping over each other.
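
A rough sketch of what I have in mind, using forkIO and a mutable unboxed array, with one MVar per worker so the main thread can notice each band finishing (pointValue is again a placeholder):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Monad (forM, forM_)
    import Data.Array.IO (IOUArray, newArray, writeArray)

    pointValue :: Int -> Int -> Double   -- stand-in for the real computation
    pointValue x y = fromIntegral (x + y)

    renderThreaded :: Int -> Int -> Int -> IO (IOUArray (Int, Int) Double)
    renderThreaded width height nThreads = do
      grid <- newArray ((0, 0), (height - 1, width - 1)) 0
      dones <- forM (bands [0 .. height - 1]) $ \rows -> do
        done <- newEmptyMVar
        _ <- forkIO $ do
          forM_ rows $ \y ->
            forM_ [0 .. width - 1] $ \x ->
              writeArray grid (y, x) (pointValue x y)
          putMVar done ()               -- signal: this band is finished
        return done
      -- Wait for each worker; a real program could update a progress bar here.
      mapM_ takeMVar dones
      return grid
      where
        -- Split the rows into at most nThreads contiguous, non-overlapping bands.
        bandSize = max 1 ((height + nThreads - 1) `div` nThreads)
        bands [] = []
        bands ys = let (b, rest) = splitAt bandSize ys in b : bands rest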

And then there's parallel arrays, which presumably are designed from the ground up for exactly this type of task. But are they usable yet?
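
I haven't tried DPH, but assuming something along the lines of Repa counts (the names below are from its API as I understand it, so treat this purely as a sketch), case 1 would collapse to roughly:

    import Data.Array.Repa (Array, D, DIM2, U, Z(..), (:.)(..))
    import qualified Data.Array.Repa as R

    pointValue :: Int -> Int -> Double   -- stand-in for the real computation
    pointValue x y = fromIntegral (x * y)

    -- A delayed array describing every point as a function of its coordinates.
    image :: Int -> Int -> Array D DIM2 Double
    image width height =
      R.fromFunction (Z :. height :. width)
                     (\(Z :. y :. x) -> pointValue x y)

    -- Force it in parallel across all capabilities.
    render :: Int -> Int -> IO (Array U DIM2 Double)
    render width height = R.computeP (image width height)

Whether that style actually beats hand-rolled threads on this kind of workload is exactly the sort of thing I'd like to hear about.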

Any further options?
