> The real answer, as others have pointed out, is to use a profiler,
> which performs timings on the actual code output by the compiler you
> have chosen to use. In the end, the only benchmark that makes any sense
> is running your real application under real conditions.
Thank you, and all the others who responded to my
question about "Reduction count as efficiency measure".
I understand and accept the points that have been made
about the uselessness of the Hugs statistics, whether
given as a reduction count or as a timing.
Two of the answers were of particular importance to
me.
Graeme Moss suggested using Auburn when trying to decide
which data structure to use for a particular algorithm.
I appreciate this because it seems like a practical
solution -- no matter how convoluted it might look
at first glance. I have not tried Auburn yet, but
I have read Graeme's documentation and it seems that Auburn
could be of some value to me.
Chris Okasaki privately confirmed that my best guess
regarding the choice of standard lists for scalar
products, instead of arrays or random access lists, was
not so foolish after all.
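For concreteness, the scalar product I have in mind can be
written entirely by list traversal, with no indexing anywhere
(a minimal sketch; the function name is my own):

    dot :: Num a => [a] -> [a] -> a
    dot xs ys = sum (zipWith (*) xs ys)

    -- e.g. dot [1,2,3] [4,5,6] == 32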
But there were a few threads in this discussion that
really worry me.
1. The mysterious profiler, which I first raised as a question
and which was then recommended back to me as the solution
to what I want to do. Am I missing a tool that everyone
else uses, or has such a tool not been developed
for Hugs yet?
2. If the reduction count in Hugs is a useless statistic,
then what was it designed for?
3. Several people have suggested that I should run my
benchmarks in my final target environment, not in Hugs.
Frankly, I really do not understand this.
Is portability suddenly not important here?
Suppose that I intend to develop a library of modules
that should be reasonably efficient on all Haskell
platforms. Is it too foolish to try to decide which
data structures would be the most efficient on average
for a particular class of algorithms?
I am not trying to squeeze out every single flop; I am
just trying to avoid bad choices.
4. Putting aside non-portable ByteArrays, one such
choice is Array vs. List vs. RandomAccessList.
For algorithms based on scalar products my guess
was that List would be the best choice. Chris
confirmed it. Someone else suggested otherwise,
pointing out that Arrays are supposedly optimized
specifically for numerical applications. Sorry,
but I do not see why they should fare better
for algorithms that avoid indexing. As I understand it,
Haskell Arrays are based on association lists
and their lookups cannot possibly be faster than well
optimized 'head' and 'tail' operations on standard
lists.
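To make the comparison concrete, here is the same product written
against the standard Array type, where every step goes through an
index lookup (a sketch only, assuming the Haskell 98 'Array' module
as shipped with Hugs; the name dotArray is my own):

    import Array   -- Data.Array in the hierarchical libraries

    dotArray :: Num a => Array Int a -> Array Int a -> a
    dotArray u v = sum [ u ! i * v ! i | i <- range (bounds u) ]

    -- e.g. dotArray (listArray (0,2) [1,2,3])
    --               (listArray (0,2) [4,5,6])  == 32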
Jan