Ryan Dickie wrote:
One thing I've noticed is that turning on optimizations significantly
increases the speed of Haskell code. Are you comparing code between
languages with -O2 or without opts?
I had done no optimization, but neither -O nor -O2 makes a significant
difference in either the C or
For the purposes of learning, I am trying to optimize some variation of
the following code for computing all perfect numbers less than 1.
divisors i = [j | j <- [1..i-1], i `mod` j == 0]
main = print [i | i <- [1..1], i == sum (divisors i)]
I know this is mathematically stupid, but the point
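
The code as quoted lost its `<-` arrows in transit (and the upper bound is truncated). Here is a runnable sketch of the same brute-force idea, with one small speedup: a proper divisor of i is at most i `div` 2, so there is no need to scan up to i-1. The bound 10000 in main is my own choice for illustration, since the original limit did not survive.

```haskell
-- Sketch of the thread's perfect-number search, with the search space
-- for divisors halved. The limit 10000 is an assumption, not from the
-- original post.
divisors :: Int -> [Int]
divisors i = [j | j <- [1 .. i `div` 2], i `mod` j == 0]

perfect :: Int -> [Int]
perfect n = [i | i <- [1 .. n], i == sum (divisors i)]

main :: IO ()
main = print (perfect 10000)  -- [6,28,496,8128]
```

This is still O(n^2) overall; trial division only up to the square root (pairing each divisor j with i `div` j) would cut it further, at the cost of slightly fiddlier bookkeeping.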
Jaak Randmets wrote:
On 10/28/07, Prabhakar Ragde [EMAIL PROTECTED] wrote:
For the purposes of learning, I am trying to optimize some variation of
the following code for computing all perfect numbers less than 1.
divisors i = [j | j <- [1..i-1], i `mod` j == 0]
main = print [i | i <- [1..1
apfelmus wrote:
I mean, contemplate this trivial exercise for a moment: write a program
that reads from stdin a series of numbers (one number per line), and
writes out the sum of the last n numbers. This is a trivial problem,
and I have no doubt that someone who knows Haskell better than
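
One way to attack that exercise in Haskell is a single lazy pass: walk the input list with a second pointer that runs n elements ahead, so the window of the last n numbers falls out without ever measuring the list's length. This is a sketch under my own assumptions (the helper name `lastN` and the choice of n are not from the thread):

```haskell
-- Sum the last n numbers read from stdin, one per line.
-- lastN makes one pass: the first argument is the input with n elements
-- dropped; when it runs out, the second argument is exactly the last n.
lastN :: Int -> [a] -> [a]
lastN n xs = go (drop n xs) xs
  where
    go (_:ys) (_:zs) = go ys zs
    go _      zs     = zs

main :: IO ()
main = do
  nums <- map read . lines <$> getContents
  print (sum (lastN 3 (nums :: [Integer])))
```

Run as e.g. `seq 1 10 | ./sumlast` to get the sum of 8, 9, and 10. For truly huge inputs one would also want a strict accumulator or Data.Sequence as a bounded queue, which is presumably the kind of refinement the thread is discussing.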
ok wrote:
If one wants a lazy dynamically typed programming language that
lets you construct infinite lists by using the basic language
mechanisms in a simple and direct way, there's always Recanati's
Lambdix, which is a lazy Lisp. I don't know whether that ever saw
serious use, but it does
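
For comparison with Lambdix, Haskell needs no special machinery for this at all: laziness is the default, so infinite lists arise from ordinary recursive definitions. A minimal illustration (the names nats and fibs are mine):

```haskell
-- Infinite lists from plain recursive definitions; laziness means only
-- the demanded prefix is ever computed.
nats :: [Integer]
nats = 0 : map (+ 1) nats

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```

In a strict Lisp the same effect requires explicit delay/force thunks or streams; a lazy Lisp like Lambdix makes the direct style above available too.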