Bart Demoen wrote:
Simon wrote:
CSE can also make space consumption asymptotically worse.
I can understand this when garbage collection is around and because of
CSE, some data structures don't get cleaned up; but is it also true
without gc ?
If you don't use a rule like
I'm still curious about my first question, though, about the specific
optimizations included in ghc and hbc. If in fact they don't do CSE,
are there optimizations which they do perform which would change the
asymptotic running time?
GHC doesn't do CSE, and this is part of the reason...
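The space hazard mentioned above can be seen in a small sketch (an illustration of the general hazard, not of GHC's actual optimiser internals):

```haskell
-- Without CSE: each occurrence of [1 .. n] is produced lazily and
-- consumed in constant space, because nothing retains the list's head.
unshared :: Int -> Int
unshared n = sum [1 .. n] + length [1 .. n]

-- What CSE would effectively produce: the single shared list is kept
-- alive for the second consumer while the first traverses it, so the
-- whole spine is resident at once -- O(n) space instead of O(1).
shared :: Int -> Int
shared n = let xs = [1 .. n] in sum xs + length xs

main :: IO ()
main = print (unshared 100, shared 100)
```

Both versions compute the same value; only their residency differs, which is exactly why a compiler cannot apply CSE blindly.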
Keith Wansbrough [EMAIL PROTECTED] writes:
So while Hugs gives you a reduction count (or even a millisecond
duration), this is essentially meaningless: in a real application
you would compile the code with an optimising compiler. The effect
this can have on your execution time can easily be
On 24-Nov-1998, Jan Skibinski [EMAIL PROTECTED] wrote:
On Tue, 24 Nov 1998, Lennart Augustsson wrote:
I'm not sure what you're trying to accomplish. If you want to
decide from a theoretical point of view which is the better
algorithm then you should do something more abstract.
If
| Is this true in practice? That is, are there programs which have
| different asymptotic running times when compiled under ghc or hbc than
| when running under Hugs?
|
| It would actually surprise me if there were; I'm having a hard time
| imagining a realistic optimization that would do this.
Ralf Hinze [EMAIL PROTECTED] writes:
| Is this true in practice? That is, are there programs which have
| different asymptotic running times when compiled under ghc or hbc than
| when running under Hugs?
|
| It would actually surprise me if there were; I'm having a hard time
| imagining
At 05:12 -0600 1998/11/24, Jan Skibinski wrote:
To paraphrase George Orwell, "All reductions are equal, but some are
more equal than others."
...
So are assembly language instructions. Yet, I could think about
some average instruction size for similarly written programs.
Naive as my question
If neither the reduction count nor the timing is an appropriate
measure of efficiency in Hugs, then what is? Is there any
profiling tool available for the interpreter?
Since modern CPUs are designed to make the more commonly used
instructions faster, the only way to find out
At 09:49 -0600 1998/11/24, Jan Skibinski wrote:
Since modern CPUs are designed to make the more commonly used
instructions faster, the only way to find out the speed of the
components of a program is to use a profiler.
Looks like you missed my last question:
"Is there
...(continued)...
Sorry, I forgot to include my references:
[1]
@InProceedings{oka95b,
  author    = "Chris Okasaki",
  title     = "Purely functional random-access lists",
  pages     = "86--95",
  booktitle = "Conference Record of FPCA '95",
  year      = 1995,
  publisher =
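For context, here is a compressed sketch of the skew-binary random-access list described in [1]: a list of complete binary trees tagged with their sizes, giving O(1) cons and O(log n) indexing. The names (`RList`, `consR`, `lookupR`, `fromList`) are mine, not Okasaki's; see the paper for the full structure and analysis.

```haskell
-- A list of complete binary trees, each paired with its size.
data Tree a = Leaf a | Node a (Tree a) (Tree a)
type RList a = [(Int, Tree a)]

-- O(1): merge the two leading trees when they have equal size.
consR :: a -> RList a -> RList a
consR x ((w1, t1) : (w2, t2) : rest)
  | w1 == w2 = (1 + w1 + w2, Node x t1 t2) : rest
consR x ts = (1, Leaf x) : ts

-- O(log n): skip whole trees, then descend into one.
lookupR :: Int -> RList a -> a
lookupR _ [] = error "RList: index out of range"
lookupR i ((w, t) : rest)
  | i < w     = lookupTree w i t
  | otherwise = lookupR (i - w) rest

lookupTree :: Int -> Int -> Tree a -> a
lookupTree _ 0 (Leaf x)     = x
lookupTree _ 0 (Node x _ _) = x
lookupTree w i (Node _ l r)
  | i - 1 < half = lookupTree half (i - 1) l
  | otherwise    = lookupTree half (i - 1 - half) r
  where half = w `div` 2
lookupTree _ _ (Leaf _)     = error "RList: index out of range"

fromList :: [a] -> RList a
fromList = foldr consR []
```

A quick use: `lookupR 4 (fromList [0 .. 9 :: Int])` walks past the first tree and into the second rather than traversing four cons cells.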
Hi Mark,
To paraphrase George Orwell, "All reductions are equal, but some are
more equal than others."
:-)
So are assembly language instructions. Yet, I could think about
some average instruction size for similarly written programs.
Naive as my question might have been, I asked it anyway
in
which of those data structures would give me the best
response time?
There is no simple answer to that question. It depends on how you
use it and what implementation you're going to use.
Set up a typical usage scenario and test it on the platform
you are going to use it on, that's
So are assembly language instructions. Yet, I could think about
some average instruction size for similarly written programs.
Do you mean `time' rather than `size'?
If you do, then you can get rather wrong results when considering
assembly language since the concept of timing an individual
On Tue, 24 Nov 1998, Lennart Augustsson wrote:
So are assembly language instructions. Yet, I could think about
some average instruction size for similarly written programs.
Do you mean `time' rather than `size'?
Sorry, I meant 'time'.
..
I'm not sure what you're trying to
May I at least assume the implication:
reduction_count1 &lt; reduction_count2  ==&gt;  time1 &lt; time2 ?
I'm not sure how the reduction count is done in Hugs,
but if it simply counts G-machine instructions then
you can't assume even this. It's likely to be true,
but G-machine instructions can
Hi Jan,
| Could anyone explain what is a relation between
| number of reductions and execution time of any
| given Haskell function? Is there any linear
| relationship between the two on any given machine
| and compiler/interpreter?
Absolutely, definitely not. To
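The point is easy to demonstrate: two computations with the same number of additions can differ wildly in cost per step. A rough GHC-only harness (the helper name `timeIt` is mine; `System.CPUTime` and `evaluate` are standard base-library functions) makes this visible:

```haskell
import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)

-- Force a value to weak head normal form and report CPU time.
timeIt :: String -> a -> IO ()
timeIt label x = do
  start <- getCPUTime              -- picoseconds
  _     <- evaluate x
  end   <- getCPUTime
  putStrLn (label ++ ": "
            ++ show (fromIntegral (end - start) / 1e9 :: Double) ++ " ms")

main :: IO ()
main = do
  -- Same number of additions, very different cost per "reduction":
  timeIt "sum of Int    " (sum [1 .. 200000 :: Int])
  timeIt "sum of Integer" (sum [10 ^ 40, 2 * 10 ^ 40 .. 200000 * 10 ^ 40 :: Integer])
```

Machine-word additions and 40-digit bignum additions each count as one reduction, yet their timings diverge, so no single constant converts reduction counts into seconds.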