I'm probably being dense here and not seeing the wood for the trees,
and I've included lots of tedious junk below, but here goes.

I have a program which runs too slowly.  I knew when I wrote it that I
was taking a `naive' approach to the program - I've tried to write
what I meant rather than what I thought might be fast; if it turned
out to run too slowly I was planning to optimise it then.  So here I
am :-).

I've been using GHC's profiling support.  This gives me information
about numbers of `allocations' (which I presume refers to memory
allocations) and does indeed point the finger at the part of the
program that I most suspected anyway.

I think the program is using too much CPU rather than too much memory
because (a) my machine doesn't thrash and (b) the test case I'm using
is not very large.

However, my GHC installation is a bit `dodgy' and a bit stale too, and
the profiled executables suffer from (a) a lack of any CPU time
indication in the profiling reports and (b) occasional unexpected
core dumps.  Rest assured that I plan to update my GHC installation
and recheck all my settings, and will report the problem to the GHC
people if it still doesn't work properly - I'm just giving this info
as background detail, not as a problem report.

I have a number of loosely related questions:

* I find it difficult to understand how the code I write translates
into actual algorithms, memory management, etc.  Haskell is such a
nice language that it hides all this detail from me :-).  So, I'd be
grateful for a reference or two on this area.

* I added a reasonable amount of strictness in the `obvious' kinds of
places, but with no visible effect on performance.  Does adding
strictness help with CPU use as well as with memory use?  Where is it
most beneficial to add strictness?
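
(By `obvious' places I mean accumulator positions.  Schematically,
with invented names, the pattern I applied was forcing the
accumulator with seq:)

```haskell
-- A strict left-fold sum: `seq` forces the accumulator at each
-- step, so the addition happens as we go instead of building a
-- long chain of thunks -- which costs memory to hold and CPU to
-- force later.  Names invented for illustration.
strictSum :: [Int] -> Int
strictSum = go 0
  where
    go acc []     = acc
    go acc (x:xs) = acc `seq` go (acc + x) xs
```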

* Most of my program is in a state-threading monad of my own (built
out of normal functional pieces).  The main program (which is in the
IO monad) invokes the `grind handle on state machine' function,
passing the action to perform and the initial state and getting the
new state back.  Is this good, bad, or ugly?  It does make it easy to
decouple the nasty I/O handling event loop part of the program from
the clean, supposedly predictable, state engine.  (Yes, the program
does really need to be a state engine.)
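
(Schematically, and with invented names standing in for my real
ones, the monad has this shape:)

```haskell
-- A state-threading monad built out of ordinary functions: an
-- action is a state transformer returning a value and a new state.
newtype SM s a = SM { runSM :: s -> (a, s) }

instance Functor (SM s) where
  fmap f (SM g) = SM $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (SM s) where
  pure a = SM $ \s -> (a, s)
  SM f <*> SM g = SM $ \s -> let (h, s1) = f s
                                 (a, s2) = g s1
                             in (h a, s2)

instance Monad (SM s) where
  SM g >>= f = SM $ \s -> let (a, s') = g s in runSM (f a) s'

getS :: SM s s                 -- read the current state
getS = SM $ \s -> (s, s)

putS :: s -> SM s ()           -- replace the state
putS s = SM $ \_ -> ((), s)

-- The `grind handle on state machine' entry point: the IO-monad
-- main program calls this with an action and the initial state,
-- and gets the result and the new state back.
grind :: SM s a -> s -> (a, s)
grind = runSM
```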

* In particular, the bit of code that seems to be slow is the part
responsible for updating a part of the state whose type is broadly
speaking

   Array (x,y) [value]

I need to do similar cumulative updates to the values in a whole
region of the array, and I do it with (basically) this

 -- This applies an update to all cells within range of the region:

 regionallyUpdateArenaArray ::
     Region -> Coord -> (Distance -> v -> v) -> ArenaArray v
     -> ArenaArray v

 regionallyUpdateArenaArray
             region@((xmin, ymin), (xmax,ymax))
             range uf old
     = accum (flip uf) old loc_dists
       where loc_dists = [ (loc, dist) |
                           x <- [ max 0 (xmin-range) .. min (xsz-1) (xmax+range) ],
                           y <- [ max 0 (ymin-range) .. min (ysz-1) (ymax+range) ],
                           let loc = (x,y),
                           let dist = region |-| locationToRegion loc,
                           dist <= maxdist
                         ]
             maxdist = toDistance range

 The |-| operator calculates the distance (actually, the squared
 distance embedded in a newtype) between two regions.

In C I would have just had one big flat array and updated it in
place.  How can I get a Haskell compiler to do the same?
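
(I gather from the library documentation that Data.Array.ST lets me
do roughly this -- thaw the array, mutate it in place inside
runSTArray, and get a pure array back.  Is this sketch, with Int
cells standing in for my real [value] cells, the right idea?)

```haskell
import Control.Monad (forM_)
import Data.Array (Array)
import Data.Array.ST (runSTArray, thaw, readArray, writeArray)

-- Bump every cell in a rectangular region by one, C-style, by
-- mutating an STArray in place.  Types simplified from my real
-- ArenaArray: Int cells instead of [value] cells.
bumpRegion :: ((Int,Int),(Int,Int))
           -> Array (Int,Int) Int -> Array (Int,Int) Int
bumpRegion ((x0,y0),(x1,y1)) old = runSTArray $ do
  arr <- thaw old              -- one copy into a mutable STArray
  forM_ [ (x,y) | x <- [x0..x1], y <- [y0..y1] ] $ \loc -> do
    v <- readArray arr loc
    writeArray arr loc (v + 1) -- in-place update
  return arr
```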

* I really wanted a simple, flat, strict type rather than the lazy,
nested [value].  I'm only using a list because I want to do a zipWith
on it in the update function.  Also, the manual says that an Array is
not strict in its contents, which doesn't seem helpful to me.
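
(For instance, if every cell's list had the same known length, I
could flatten to an unboxed UArray with an extra slot index --
invented names, just a sketch of the representation I'd like:)

```haskell
import Data.Array.Unboxed (UArray, listArray, elems, bounds)

-- Flat, strict stand-in for Array (x,y) [Double] when each cell
-- holds exactly k values: index by (x, y, slot).  Unboxed arrays
-- evaluate their elements, so nothing sits around as a thunk.
type Flat = UArray (Int,Int,Int) Double

-- The per-cell zipWith from my update function, done over the
-- whole flat array in one pass.
zipFlat :: (Double -> Double -> Double) -> Flat -> Flat -> Flat
zipFlat f a b = listArray (bounds a) (zipWith f (elems a) (elems b))
```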

Ian.
