Re: [Haskell-cafe] Boxed Mutable Arrays

2009-12-16 Thread Richard Kelsall

Brad Larsen wrote:

I have considered using Data.IntMap to implement a sort of faux hash
table, e.g., introduce a Hashable class, and then use an IntMap to
lists of Hashable.  The result would be a pure, persistent ``hash
table''.  In such a case, however, lookup/insert/delete operations
would have average complexity logarithmic in the number of buckets.



I'll throw in an idea that has been nagging me about hash tables.
You could have a list of hash tables of increasing size. On an insert
collision in a smaller table all the entries get put into the next one
up. For a lookup you would have to check all the tables until you find
the hash.

e.g.

With three hash tables in the list, of these example sizes:
Table size 2^2 = 2 probable entries before collision
           2^4 = 4
           2^6 = 8

h..h
...h.h.h...h
h..h...hh...h..h.h..h...


I expect if it's any good it has already been done.
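
A minimal sketch of the cascading idea, assuming a simple Hashable
class and using Data.IntMap for each level (the names, sizes and the
spill policy here are only illustrative, not a tuned implementation):

import qualified Data.IntMap as IM

class Hashable a where
    hash :: a -> Int

-- One table per level; smaller levels are checked first.
newtype Cascade a = Cascade [(Int, IM.IntMap a)]   -- (capacity, table)

emptyCascade :: Cascade a
emptyCascade = Cascade [(2^2, IM.empty), (2^4, IM.empty), (2^6, IM.empty)]

-- On a collision (or a full level) the existing entries are pushed into
-- the next level up and the new entry takes their place. This simple
-- sketch ignores the case where the spill itself overflows the next level.
insertC :: Hashable a => a -> Cascade a -> Cascade a
insertC x (Cascade levels) = Cascade (go (hash x) levels)
  where
    go h [] = [(2^8, IM.singleton h x)]
    go h ((cap, t) : rest)
        | IM.member h t || IM.size t >= cap = (cap, IM.singleton h x) : spill t rest
        | otherwise                         = (cap, IM.insert h x t)  : rest
    spill t ((cap, t') : rest) = (cap, IM.union t t') : rest
    spill t []                 = [(2^8, t)]

-- A lookup has to check every level, smallest first.
lookupC :: Hashable a => a -> Cascade a -> Maybe a
lookupC x (Cascade levels) = go levels
  where
    h = hash x
    go []              = Nothing
    go ((_, t) : rest) = maybe (go rest) Just (IM.lookup h t)

Because the level sizes grow geometrically, a lookup only ever touches a
handful of tables even when the structure holds many entries.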


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Leaner Haskell.org frontpage

2009-07-10 Thread Richard Kelsall

Jeff Wheeler wrote:
...

Search that follows it is awkward. There are three large search
choices for beginners: 1) the search at the top, which confusingly has
two submit buttons (with ambiguous differences to a beginner); 2) the
Search link near the top of the navigation (which links to an almost
empty page that might as well be included at the link's location); and
3) the Search link underneath the About header, which doesn't seem to
belong at all.


I agree the search aspect could be improved: the MediaWiki software/
database indexing needs an upgrade, the Google 'custom search' Haskell
logos are out-of-date and having two Google search links seems
excessive.

Repeating my old search for 'mdo' which started me on this thread
http://www.mail-archive.com/haskell-cafe@haskell.org/msg7.html
the box at the top still returns no results, but it is better because
it does now suggest looking at this page
http://haskell.org/haskellwiki/HaskellWiki:Searching
As I understand it the box at the top is built into the wiki software
and can be improved by upgrading the MediaWiki version/indexing.
(It would be good to report to MediaWiki that the error message should,
if possible, say "Search words of X characters or less are not indexed"
when appropriate rather than "Zero results". Maybe the latest version
is cleverer.)

The two Google links, neither of them added by me, both have old logos
and seem to give similar results for 'mdo' except that the first returns
results beyond just haskell.org. For this 'mdo' search I can't see any
advantage in having two Google search links.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Input/Output and Monad Transformers

2009-07-03 Thread Richard Kelsall

Gwern Branwen wrote:
...

 Ultimately, the problem with Haskell and ML for our purposes is that
the brightest and most aggressive programmers in those languages,
using the most aggressive optimization techniques known to the
research community, remain unable to write systems codes that compete
reasonably with C or C++. The most successful attempt to date is
probably the FoxNet TCP/IP protocol stack, which incurred a 10x
increase in system load and a 40x penalty in accessing external memory
relative to a conventional (and less aggressively optimized) C
implementation. [4, 6]

http://www.bitc-lang.org/docs/bitc/bitc-origins.html


Interesting paper.

Putting these remarks in context, in case anyone takes them as a current
critique of Haskell, they are apparently about ten years out-of-date and
apply to this SML program

http://www.cs.cmu.edu/~fox/foxnet.html

I wonder what would happen if the program was ported and benchmarked
in a recent version of GHC.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Is 78 characters still a good option? Was: [Haskell-cafe] breaking too long lines

2009-04-21 Thread Richard Kelsall

Dusan Kolar wrote:
...
Or is the reason much deeper? Or, is the bound set to 78 characters just 
because it is as good number as any other?

...
As a little historical detour I think the 80 character limit goes back
to 1928 when IBM designed their punched card format

http://en.wikipedia.org/wiki/Punched_card#IBM_80_column_punch_card_format

I guess it subsequently got embedded into printing and screen widths
and related software. The slightly less than 80 characters allows for
CR LF characters.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell Logo Voting has started!

2009-03-19 Thread Richard Kelsall

Deniz Dogan wrote:

IANAL, so would this be a problem if this logo won?


Not necessarily. The Lenny222 entry is distinctly multicoloured,
unlike TechSmith's, which might distinguish it. Trademarks may only
apply to a particular industrial sector or geographical region.
TechSmith is in computer software and seems to be world-wide, so
we probably fail on that one.

In the UK they seem to have trademarked the word 'TECHSMITH'

http://www.ipo.gov.uk/types/tm/t-os/t-find/t-find-number?detailsrequested=C&trademark=E2068609

I can't find their image trademarked maybe because the search on
that site is confusing. I've tried Google but it's very difficult
to find similar abstract images. I'm sure there must be some. If
they haven't trademarked the image anywhere it would suggest either
they aren't too attached to it or somebody else is using something
so close that they can't trademark it. Maybe a search of the US
trademark registry would be more productive.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-09 Thread Richard Kelsall

Don Stewart wrote:

Help identifying and implementing a voting process is very welcome.


Maybe we could have an administrator who receives the votes by email
and we confirm our emailed vote by appending the MD5 of our email to
a Haskell wiki page. The machine-readable email format might be:

I vote for these three logos in order of preference:
23
5
78
Here is my random salt:
kauhgfhgh
Here is the MD5 of the above:
e4d909c290d0fb1ca068ffaddf22cbd0

The administrator can check the MD5s he has received by email and
mark them as good on the wiki page, count the votes and publish
the result.
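
A rough sketch of assembling such a ballot and computing its digest,
assuming an external md5sum binary on the PATH (the exact wire format
below is only illustrative):

import System.Process (readProcess)

ballot :: [Int] -> String -> String
ballot prefs salt = unlines $
       "I vote for these three logos in order of preference:"
     : map show prefs
    ++ ["Here is my random salt:", salt]

main :: IO ()
main = do
    let b = ballot [23, 5, 78] "kauhgfhgh"
    out <- readProcess "md5sum" [] b            -- ballot fed on stdin
    putStr b
    putStrLn "Here is the MD5 of the above:"
    putStrLn (takeWhile (/= ' ') out)           -- md5sum prints "digest  -"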


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: The Haskell re-branding exercise

2009-02-09 Thread Richard Kelsall

Simon Marlow wrote:
I suggest we do voting by email, and restrict voting to those who have 
ever posted on haskell-cafe before 1 Jan 2009.  We could then have an 
auto-confirmation scheme similar to mailing list sign-up where the 
confirmation message is sent back to the originator to confirm their 
identity, containing a verification link to click on.


I realise there are flaws in this, but it seems to be (a) cheap to 
implement and participate in, and (b) good enough.




That sounds better than my Haskell Wiki verification method.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How do we decide on the new logo?

2009-01-03 Thread Richard Kelsall

Achim Schneider wrote:
...

Step 3: Re-open contest, accepting submissions _using_ the winning
logo, in the categories a) colour schemes[1] b), official shapes[2] c),
font[3] to go to b), d) layouts of b) + c)

...

This is a good suggestion. I would like small adjustments to the logo
to be possible before it's frozen. Some kind of Step 3 will result
in a much better logo. For example, I really like the pyramid from
above / square containing three triangles that Lenny222 submitted, but
I wouldn't choose this precise colour scheme and form. I didn't have
time to enter an alternative. When the field has been significantly
reduced I think people will be willing to expend effort improving the
remaining entries.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lines of code metrics

2008-08-20 Thread Richard Kelsall

Greg Fitzgerald wrote:
Does anyone know of a good case study comparing a project written in C 
versus one written in Haskell?  I'm mostly looking for a comparison of 
lines of code, but any other metric, such as time to market and code 
quality metrics could also be 


Maybe this one is relevant: "Evaluating High-Level Distributed Language
Constructs" by Phil Trinder

http://www.macs.hw.ac.uk/~trinder/papers/ICFP2007.pdf

which compares GdH, Erlang and C++.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Documenting the impossible

2008-06-15 Thread Richard Kelsall

Andrew Coppin wrote:
Obviously there is no way in hell this code path can ever be executed. 


Ah, the number of times I've thought that and been wrong :)

From a simplistic perspective, what I think you're asking for is
to have obscure occasions when your program goes wrong with random
consequences and doesn't give you an error message. This is a programmer's
worst nightmare. I can think of several dimensions of bug difficulty:

When the bug exhibits:
  Hard   : Infrequent at run-time.
  Medium : Frequent at run-time.
  Easy   : Compile-time.

Reproducible:
  Hard   : No obvious pattern or way to induce the error.
  Medium : A reasonably strong correlation with some precursor.
  Easy   : Happens every time you do a particular thing.

Error message:
  Hard   : No error message.
  Medium : Non-specific error message.
  Easy   : Error message that relates to an identifiable
   point in your program with relevant state information.

Error message location:
  Medium : Appears on the screen.
  Easy   : Shown on screen and recorded in a log file.

Where it happens:
  Hard   : Only on your user's machine or with their configuration.
  Easy   : On all machines.


By classifying error messages, for bugs which you have stated are
infrequent and difficult to reproduce, as not needing to be displayed
or logged you have pushed all these bugs into the seriously hard
category. By adding 'undefined' subsequent behaviour you have made
these bugs even more difficult. What we should be doing is pushing
every bug in the opposite direction - towards the easy end of all
the above dimensions.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] is there some book about haskell and data struct and alg?

2008-05-28 Thread Richard Kelsall

smellcode wrote:

is there some book about haskell and data struct and alg?
i mean data struct and algorithm in haskell

i am freshman

i want to study haskell with data struct and alg


I expect this is obvious, but just in case, there is a list here

http://haskell.org/haskellwiki/Books

which may help. I found Hutton to be an easier read than Hudak,
but Hudak contains some very wonderful things. I look forward to
the new book which you can see bits of here

http://www.realworldhaskell.org/blog/


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-15 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,


Yes it was just a plausible guess, but not contradicted by the experts.
And that was using the Windows version of GHC so other versions may
have better optimisation. I don't know how to check, maybe the experts
can illuminate the subject?


note that my letter was not contradicted too LOL

1. many experts just don't read this list due to its high traffic,
write into ghc-users instead


Good point. I have started a thread here

http://www.haskell.org/pipermail/glasgow-haskell-users/2008-May/014742.html



2. ghc is too large system and no one know all its details. this
particular option may be set up 10 years ago and never
touched/studied by anyone since then. for example, i'm ghc performance
expert [to some degree], but i don't know anything about its build
system. all that i can tell is that ghc base library build times don't
prohibit use of -O2



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

J C wrote:

[EMAIL PROTECTED] wrote:


 Hello JC, I think you've set yourself a challenge there :) Welcome to
 Haskell programming. Taking a Shootout entry and playing with it is
 a great way to learn Haskell. The Shootout provides an example in your
 favourite previous language for comparison and a small well defined
 program with exact test results you can pit your wits against. Fame
 awaits you for a fast and beautiful entry. I'm still learning useful
 things from the Fasta benchmark. It's surprising how many interesting
 things you can discover in a small piece of code.


It may be fun, but I don't think it would be meaningful. My code will
be, most likely, slow, leaving some doubt as to whether it's slow
because of the limitations of the compiler or my inexperience.

On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(


Don't tell the experts who wrote the current shootout entries, but the
playing field is tilted radically in favour of us beginners being able
to improve on their entries because of new versions of GHC and new
tools that have been developed since they wrote their entries.

GHC will now automatically perform many of the optimisations that used
to have to be done by hand. For example I was surprised to discover
the other day when working on Fasta that putting this plain and simple
version of splitAt

splitAt n (x : xs) = (x : xs', xs'')
where (xs', xs'') = splitAt (n-1) xs

in my program made it run more quickly than using the built-in version
of splitAt which I now know looks like (ug!) this

splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
    where
        splitAt# :: Int# -> [a] -> ([a], [a])
        splitAt# 0# xs     = ([], xs)
        splitAt# _  xs@[]  = (xs, xs)
        splitAt# m# (x:xs) = (x:xs', xs'')
          where
            (xs', xs'') = splitAt# (m# -# 1#) xs

because I was compiling my splitAt with -O2 optimisation as opposed
to the built-in version being compiled with -O. The extra optimisations
in -O2 are a new feature of GHC (and -O2 is slower to compile which is
why the built-in version doesn't use it, but that doesn't matter for the
shootout). You may similarly find various elaborations in the shootout
entries that were historically necessary for speed or memory reasons,
but which can now be removed because GHC or new libraries do the work
for us.

Have a go and see what happens to the speed when you change things
to make N-body more readable. I would bet money on there being simple
tweaks which will make it simultaneously faster and more readable.


Richard.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,

Tuesday, May 13, 2008, 6:10:54 PM, you wrote:


because I was compiling my splitAt with -O2 optimisation as opposed
to the built-in version being compiled with -O. The extra optimisations
in -O2 are a new feature of GHC (and -O2 is slower to compile which is
why the built-in version doesn't use it, but that doesn't matter for the
shootout).


-O2 is very old ghc feature and i think that ghc base library is
compiled with -O2 - it's too obvious idea



In July 2007 -O2 was documented in GHC as making no difference to
the speed of programs :

http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html

and from this thread

http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html

it appears to be currently unused for splitAt.

I guess -O2 has however been around for a long time.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,


In July 2007 -O2 was documented in GHC as making no difference to
the speed of programs :



http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html


it's because ghc is 15 years old and its documentation may be not
updated as things changes :)


and from this thread



http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html



it appears to be currently unused for splitAt.


i've read this thread. it was just assumption - don't believe it
before you have checked it



Hello Bulat,

Yes it was just a plausible guess, but not contradicted by the experts.
And that was using the Windows version of GHC so other versions may
have better optimisation. I don't know how to check, maybe the experts
can illuminate the subject?


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-12 Thread Richard Kelsall

J C wrote:
...

If anyone has a version of the N-body benchmark, where the simulation
is a type-safe pure function, I would very much like to see and time
it.


Hello JC, I think you've set yourself a challenge there :) Welcome to
Haskell programming. Taking a Shootout entry and playing with it is
a great way to learn Haskell. The Shootout provides an example in your
favourite previous language for comparison and a small well defined
program with exact test results you can pit your wits against. Fame
awaits you for a fast and beautiful entry. I'm still learning useful
things from the Fasta benchmark. It's surprising how many interesting
things you can discover in a small piece of code.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Order of Evaluation

2008-05-11 Thread Richard Kelsall

Miguel Mitrofanov wrote:

As I understand it Haskell does not specify an order of evaluation
and it would therefore be a mistake to write a program which relies
on a particular evaluation order. This is the 'unsafe' aspect of
unsafePerformIO.


Hmm... IMHO unsafePerformIO is 'unsafe' because it can lead to type 
errors in runtime. At least, that seems to be much more dangerous.




Oh yes, quite right.

This and especially the wicked unsafeCoerce seem like great ways
to shoot myself in the foot and are strong candidates for inclusion
in entries to the International Obfuscated Haskell competition :)

http://www.haskell.org/pipermail/haskell/2004-August/014387.html
http://www.haskell.org/haskellwiki/Obfuscation


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Cryptographic hash uniquness (was [Haskell-cafe] Simple network client)

2008-05-11 Thread Richard Kelsall

Graham Klyne wrote:
This is a very late response ... but I did some calculations as part of 
some work I did a while ago:

  http://www.ietf.org/rfc/rfc2938.txt
(See appendix A, "The birthday paradox")

#g


A memorable summary of the birthday paradox being:

There is roughly a 50% chance of a collision when the number of digits
in your population size reaches half the number of digits in the largest
possible population.

e.g. With a 128 bit hash you get a 50% chance of a collision when
you've hashed 2^64 files etc.
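
A quick sanity check of that rule of thumb, using the usual
approximation p = 1 - exp(-k^2 / 2N) for k items drawn from N possible
hash values (the figure below comes from this approximation, not an
exact count):

-- Probability of at least one collision among k items with a b-bit hash.
collisionProb :: Double -> Double -> Double
collisionProb bits k = 1 - exp (negate (k * k) / (2 * 2 ** bits))

-- ghci> collisionProb 128 (2 ** 64)
-- 0.3934693402873666    -- roughly even odds, as the rule of thumb says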


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Order of Evaluation

2008-05-09 Thread Richard Kelsall

PR Stanley wrote:

 (take 4 . map (0)) (f s t)
 where
s = 2 : t
t = 3 : s
 f = zipWith (-)
What would be the order of evaluation for the above code? 


As I understand it Haskell does not specify an order of evaluation
and it would therefore be a mistake to write a program which relies
on a particular evaluation order. This is the 'unsafe' aspect of
unsafePerformIO.

It is entirely at the whim of the compiler writer how it is evaluated
as long as the eventual answer produced is correct. It would be possible
to evaluate it in all sorts of exotic ways, or maybe choose a different
one for each day of the week.

However, you may be asking how GHC 6.8.2 evaluates it when compiled
at a certain optimisation level, so you can make your program run fast
or use less memory. In that case there will be a precise answer to your
question.


Richard.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Simplest possible Fasta shootout entry. How do I zap the ugly line? Suggest any other improvements.

2008-04-28 Thread Richard Kelsall

Don Stewart wrote:

r.kelsall:

(Extracting these questions from my previous thread for clarity.)

Below is my simplest possible program to solve the Fasta shootout
benchmark.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=all
http://haskell.org/haskellwiki/Shootout/Fasta

I can see one remaining flaw - the line marked 'Ugly'. What's the best
way to get rid of this line?

Any other suggestions for simplifying or improving the program would
also be interesting.

This code is about three or four times slower than the current fastest
GHC entry for the Fasta benchmark. I'll elaborate it for speed when
I've produced the best version regardless of speed.


This is quite nice, and you can probably match the current entry
by switching to lazy ByteString IO, as the current entry does. 


-- Don



Thanks Don :) I'll try that. The thing I really like about this version
is that it localizes the 'breaking the lines at 60 characters' part of
the program to just one function. I would never have thought to do this
in a language other than Haskell and looking through most of the other
language submissions for Fasta I can't see any that abstract this
feature. I seem to be able to think more clearly in Haskell.


Richard.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-26 Thread Richard Kelsall

Duncan Coutts wrote:

On Fri, 2008-04-25 at 17:30 +0100, Richard Kelsall wrote:

I've just been investigating a performance oddity in using splitAt
on a long stream of random numbers. I don't understand why GHC
appears to want to traverse the first part of the list twice.

GHC seems to implement the splitAt function something like

splitAt n xs = (take n xs, drop n xs)

whereas Hugs is something like

splitAt n (x : xs) = (x : xs', xs'')
 where (xs', xs'') = splitAt (n-1) xs

which seems much more sensible to me. Wouldn't it be better to change
GHC to the Hugs method? Have I misunderstood something?


Actually GHC uses this definition, in GHC.List:

#ifdef USE_REPORT_PRELUDE

splitAt n xs   =  (take n xs, drop n xs)

#else /* hack away */

splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
    where
        splitAt# :: Int# -> [a] -> ([a], [a])
        splitAt# 0# xs     = ([], xs)
        splitAt# _  xs@[]  = (xs, xs)
        splitAt# m# (x:xs) = (x:xs', xs'')
          where
            (xs', xs'') = splitAt# (m# -# 1#) xs

#endif /* USE_REPORT_PRELUDE */

So ghc's version should be of equivalent strictness to the hugs version.

What's interesting here is that the H98 specification of splitAt is
silly. It got 'simplified' from a previous version of the Haskell spec
and in so doing it was made less strict.

With this definition:
splitAt n xs   =  (take n xs, drop n xs)

splitAt _|_ _|_ = (_|_, _|_)

but with the sensible definition it'd return _|_

and that's really the only point of having splitAt, so that you can walk
down the list once rather than twice. If someone needs the very lazy
version there's always take and drop.

Duncan
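
For the record, a small demonstration of the laziness difference Duncan
describes (the function names are just for this test):

import Prelude hiding (splitAt)

-- The H98 Report definition: the result pair exists before either
-- argument is inspected.
splitAtReport :: Int -> [a] -> ([a], [a])
splitAtReport n xs = (take n xs, drop n xs)

-- The recursive definition: it must look at n (and maybe the list) first.
splitAtRec :: Int -> [a] -> ([a], [a])
splitAtRec n xs | n <= 0 = ([], xs)
splitAtRec _ []          = ([], [])
splitAtRec n (x : xs)    = (x : xs', xs'')
  where (xs', xs'') = splitAtRec (n - 1) xs

-- ghci> case splitAtReport undefined undefined of (_, _) -> "defined"
-- "defined"
-- ghci> case splitAtRec undefined undefined of (_, _) -> "defined"
-- *** Exception: Prelude.undefined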



That looks good, I didn't see this 'hack away' version when I found
splitAt on the web.

I'm now wondering why my splitAtRK function in the following code
makes it run in 11 seconds given a parameter of 250 but it takes
14 seconds when I change it to splitAt. Am I accidentally invoking
the (take, drop) version of splitAt? Why is mine so much faster than
the built-in version? (Using GHC 6.8.2, W2K, Intel Core 2 Duo 2.33GHz)
Maybe mine isn't working properly somehow.

(I hadn't intended to post this code just yet because I wanted to
do a bit more testing etc then ask for suggestions for simplifying
and improving it. I actually want to get rid of the line containing
splitAt because it's ugly. All suggestions for improvement gratefully
received. The time function is just temporary. This code is about three
or four times slower than the current fastest Haskell entry for the
Fasta shootout benchmark. I'll elaborate it for speed when I've got
down to the simplest version possible.)

Richard.


{-# OPTIONS -O2 -fexcess-precision #-}
--
-- The Computer Language Shootout : Fasta
-- http://shootout.alioth.debian.org/
--
-- Simple solution by Richard Kelsall.
-- http://www.millstream.com/
--

import System

import Text.Printf
import System.CPUTime

time :: IO t -> IO t
time a = do
    start <- getCPUTime
    v     <- a
    end   <- getCPUTime
    let diff = (fromIntegral (end - start)) / (10 ^ 12)
    printf "Calc time %0.3f \n" (diff :: Double)
    return v

main = do
time $ comp

comp :: IO ()
comp = do
    n <- getArgs >>= readIO . head

    title "ONE" "Homo sapiens alu"
    writeLined (cycle alu) (n * 2)

    title "TWO" "IUB ambiguity codes"
    let (r1, r2) = splitAtRK (fromIntegral (n * 3)) (rand 42)
    writeLined (map (look iubs) r1) (n * 3)

    title "THREE" "Homo sapiens frequency"
    writeLined (map (look homs) r2) (n * 5)

splitAtRK n xs | n <= 0 = ([], xs)
splitAtRK _ []  = ([], [])
splitAtRK n (x : xs) = (x : xs', xs'')
    where (xs', xs'') = splitAtRK (n - 1) xs

title :: String -> String -> IO ()
title a b = putStrLn $ ">" ++ a ++ " " ++ b

look :: [(Char, Float)] -> Float -> Char
look [(c, _)] _ = c
look ((c, f) : cfs) r = if r < f
                            then c
                            else look cfs (r - f)

lineWidth = 60

writeLined :: [Char] -> Integer -> IO ()
writeLined cs 0 = return ()
writeLined cs n = do
    let w = min n lineWidth
        (cs1, cs2) = splitAt (fromInteger w) cs
    putStrLn cs1
    writeLined cs2 (n - w)

rand :: Int -> [Float]
rand seed = newran : (rand newseed)
    where
    im = 139968
    ia = 3877
    ic = 29573
    newseed = (seed * ia + ic) `rem` im
    newran = fromIntegral newseed / fromIntegral im

alu = "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGA\
  \TCACCTGAGGTCAGGAGTTCGAGACCAGCCTGGCCAACATGGTGAAAGTCTCTACT\
  \ATACATTAGCCGGGCGTGGTGGCGCGCGCCTGTAATCCCAGCTACTCGGGAG\
  \GCTGAGGCAGGAGAATCGCTTGAACCCGGGAGGCGGAGGTTGCAGTGAGCCGAGATCGCG\
  \CCACTGCACTCCAGCCTGGGCGACAGAGCGAGACTCCGTCTCA"

iubs = [('a', 0.27), ('c', 0.12), ('g', 0.12), ('t', 0.27), ('B', 0.02),
('D', 0.02), ('H', 0.02), ('K', 0.02), ('M', 0.02), ('N', 0.02),
('R', 0.02), ('S

Re: [Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-26 Thread Richard Kelsall

Brandon S. Allbery KF8NH wrote:


On Apr 26, 2008, at 9:02 , Richard Kelsall wrote:


I'm now wondering why my splitAtRK function in the following code
makes it run in 11 seconds given a parameter of 250 but it takes
14 seconds when I change it to splitAt. Am I accidentally invoking


It's somewhat unusual to build the standard libraries with -O2, I 
think.  (It can be done but the build takes a very long time.)


Also, 11 vs. 14 seconds seems not that much of a difference when you're 
talking 250 items, especially given how inefficient Haskell lists 
(Strings are lists) are.




Yes, well spotted. If I lower the compile level from -O2 to -O the
splitAtRK version takes 14.5 seconds vs 14 seconds for the built-in
version. Thank you. That solves the puzzle.

For the benchmarks I expect they use a default packaged GHC, I don't
imagine I could get them to use specially compiled libraries to bump
up the GHC score. Which gives an interesting dilemma of whether I
could get away with adding a small relevant snippet of library code
to the program in order to give it -O2 level compilation. I wonder
how much the other compilers optimise their library code.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Simplest possible Fasta shootout entry. How do I zap the ugly line? Suggest any other improvements.

2008-04-26 Thread Richard Kelsall

(Extracting these questions from my previous thread for clarity.)

Below is my simplest possible program to solve the Fasta shootout
benchmark.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=all
http://haskell.org/haskellwiki/Shootout/Fasta

I can see one remaining flaw - the line marked 'Ugly'. What's the best
way to get rid of this line?

Any other suggestions for simplifying or improving the program would
also be interesting.

This code is about three or four times slower than the current fastest
GHC entry for the Fasta benchmark. I'll elaborate it for speed when
I've produced the best version regardless of speed.

Richard.


{-# OPTIONS -O -fexcess-precision #-}
-- The Computer Language Shootout : Fasta
-- http://shootout.alioth.debian.org/
-- Simple solution by Richard Kelsall.
-- http://www.millstream.com/

import System

main = do
    n <- getArgs >>= readIO . head

    title "ONE" "Homo sapiens alu"
    writeLined (cycle alu) (n * 2)

    title "TWO" "IUB ambiguity codes"
    let (r1, r2) = splitAt (fromIntegral (n * 3)) (rand 42)  -- Ugly !!
    writeLined (map (look iubs) r1) (n * 3)

    title "THREE" "Homo sapiens frequency"
    writeLined (map (look homs) r2) (n * 5)

title :: String -> String -> IO ()
title a b = putStrLn $ ">" ++ a ++ " " ++ b

look :: [(Char, Float)] -> Float -> Char
look [(c, _)] _ = c
look ((c, f) : cfs) r = if r < f
                            then c
                            else look cfs (r - f)

lineWidth = 60

writeLined :: [Char] -> Integer -> IO ()
writeLined cs 0 = return ()
writeLined cs n = do
    let w = min n lineWidth
        (cs1, cs2) = splitAt (fromInteger w) cs
    putStrLn cs1
    writeLined cs2 (n - w)

rand :: Int -> [Float]
rand seed = newran : (rand newseed)
    where
    im = 139968
    ia = 3877
    ic = 29573
    newseed = (seed * ia + ic) `rem` im
    newran = fromIntegral newseed / fromIntegral im

alu = "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGA\
  \TCACCTGAGGTCAGGAGTTCGAGACCAGCCTGGCCAACATGGTGAAAGTCTCTACT\
  \ATACATTAGCCGGGCGTGGTGGCGCGCGCCTGTAATCCCAGCTACTCGGGAG\
  \GCTGAGGCAGGAGAATCGCTTGAACCCGGGAGGCGGAGGTTGCAGTGAGCCGAGATCGCG\
  \CCACTGCACTCCAGCCTGGGCGACAGAGCGAGACTCCGTCTCA"

iubs = [('a', 0.27), ('c', 0.12), ('g', 0.12), ('t', 0.27), ('B', 0.02),
('D', 0.02), ('H', 0.02), ('K', 0.02), ('M', 0.02), ('N', 0.02),
('R', 0.02), ('S', 0.02), ('V', 0.02), ('W', 0.02), ('Y', 0.02)]

homs = [('a', 0.3029549426680), ('c', 0.1979883004921),
('g', 0.1975473066391), ('t', 0.3015094502008)]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-25 Thread Richard Kelsall

I've just been investigating a performance oddity in using splitAt
on a long stream of random numbers. I don't understand why GHC
appears to want to traverse the first part of the list twice.

GHC seems to implement the splitAt function something like

splitAt n xs = (take n xs, drop n xs)

whereas Hugs is something like

splitAt n (x : xs) = (x : xs', xs'')
where (xs', xs'') = splitAt (n-1) xs

which seems much more sensible to me. Wouldn't it be better to change
GHC to the Hugs method? Have I misunderstood something?


Richard.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Hayoo! beta 0.1

2008-04-06 Thread Richard Kelsall

Timo B. Hübel wrote:

Hello,

we are pleased to announce the first beta release of Hayoo!, a Haskell API 
search engine providing advanced features like suggestions, find-as-you-type, 
fuzzy queries and much more.


Visit Hayoo! here: http://holumbus.fh-wedel.de/hayoo

Please bear in mind that this is still a beta release and we are continuously 
working on further improvements. Our plans for the future include:


- Covering all documentation available at Hackage.
- Compatibility with non-JavaScript enabled browsers.


Yes please.


- Providing a web interface where people can point Hayoo! to an URI linking
  to Haddock documentation which will be automatically included in Hayoo!.

Hayoo! was developed as a use-case for the Holumbus framework, which aims to 
help at the creation of very flexible and highly specialized search engines.


Although Holumbus is still under heavy development and we have no official 
release yet, some informations and a Darcs repository are available at the 
Holumbus homepage: http://holumbus.fh-wedel.de


Any suggestions and feedback is highly welcomed.



Really like it. Looks very smart. Thank you!

Little detail: after visiting a page that appears in the search results
and then pressing the back button, the search I did is no longer there.
(On my eccentric Firefox setup anyway.)


Cheers,
Timo B. Hübel  Sebastian M. Schlatt


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Hayoo! beta 0.1

2008-04-06 Thread Richard Kelsall

Timo B. Hübel wrote:

On Sunday 06 April 2008 16:38:03 Richard Kelsall wrote:

Little detail : After visiting a page that appears in the search results
then doing a back button the search I did is no longer there. (On my
eccentric Firefox setup anyway.)


Hm, that's strange. I know about this problem when using Konqueror, but my 
Firefox (Linux, 2.0.0.13) keeps the results even after visiting the 
documentation and hitting the back button.


Which Firefox version do you use?



I'm still on the old Firefox 1.5.0.5 :

Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.8.0.5) Gecko/20060902
Firefox/1.5.0.5

I expect most people have upgraded by now so I wouldn't worry about
it if it works in the new version. I just tried again switching off my
Adblock and switching on Java and Javascript, but couldn't seem to get
anything except a blank search page after clicking the back button.
Maybe just my strange setup.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell wiki is searched by google.

2008-02-14 Thread Richard Kelsall

Mads Lindstrøm wrote:

Richard Kelsall wrote:

Daniil Elovkov wrote:

Hello

I remember there was a problem with haskell wiki pages not being indexed 
and searched.


Now it seems to work perfectly. Actually typing 'monomorphism 
restriction' in google I see the appropriate wiki page as the first result.


I must have missed the moment when it was fixed. Just sharing my 
observations. Thanks.

Hello Daniil, I think I may have started that discussion here :

http://www.haskell.org/pipermail/haskell-cafe/2007-November/035127.html

but sadly a search for mdo in the Search box at the top of this page

http://haskell.org/haskellwiki/Haskell

still gives no results. I suspect something needs adjusting in the
configuration of MediaWiki for the haskellwiki.


Not MediaWiki, but the underlying database. If HaskellWiki uses MySql
ft_min_word_len needs to be set to something smaller than four. See
http://dev.mysql.com/doc/refman/5.0/en/fulltext-fine-tuning.html . After
doing this the indexes need to be rebuilt.


Testing using this page

http://www.haskell.org/haskellwiki/Extending_Phooey

by searching for words which I can see are on the page I get

  Not found   Found
  ---------   -----
  mdo         Phooey
  map         wire
  way         play
  let         persistence

So three character words don't get indexed but four character words do.

I mentioned in the previous thread that some longer words weren't
getting indexed. It looks like this has been fixed. All the long words
I just tried worked properly. And, better than Google, the words get
indexed as soon as they are changed. I tried searches for words added
this afternoon and they returned up-to-date results. Excellent! Whoever
it was thank you for fixing the search.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell wiki is searched by google.

2008-02-13 Thread Richard Kelsall

Daniil Elovkov wrote:

Hello

I remember there was a problem with haskell wiki pages not being indexed 
and searched.


Now it seems to work perfectly. Actually typing 'monomorphism 
restriction' in google I see the appropriate wiki page as the first result.


I must have missed the moment when it was fixed. Just sharing my 
observations. Thanks.


Hello Daniil, I think I may have started that discussion here :

http://www.haskell.org/pipermail/haskell-cafe/2007-November/035127.html

but sadly a search for mdo in the Search box at the top of this page

http://haskell.org/haskellwiki/Haskell

still gives no results. I suspect something needs adjusting in the
configuration of MediaWiki for the haskellwiki.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell wiki is searched by google.

2008-02-13 Thread Richard Kelsall

Daniil Elovkov wrote:

Richard Kelsall wrote:

Daniil Elovkov wrote:

Hello

I remember there was a problem with haskell wiki pages not being 
indexed and searched.


Now it seems to work perfectly. Actually typing 'monomorphism 
restriction' in google I see the appropriate wiki page as the first 
result.


I must have missed the moment when it was fixed. Just sharing my 
observations. Thanks.


Hello Daniil, I think I may have started that discussion here :

http://www.haskell.org/pipermail/haskell-cafe/2007-November/035127.html

but sadly a search for mdo in the Search box at the top of this page

http://haskell.org/haskellwiki/Haskell

still gives no results. I suspect something needs adjusting in the
configuration of MediaWiki for the haskellwiki.


Indeed.

But searching with google

   mdo site:haskell.org/haskellwiki

does a pretty good job - 6 results :)


I have to admit I haven't yet discovered how all the documentation fits
together. I set the 'Search haskell.org' link on the left of the front page
to search Google like this

mdo site:haskell.org

for mdo which gives 577 results including the relevant page of the
GHC documentation at the top. But I'm not really sure it should search
everything on haskell.org. I can easily change this if anybody suggests
a better search criteria.

I looked all over the wiki for some way to warn users that the search
results might be incomplete. All the relevant bits - like the search
results messages - appeared to be locked against me changing them.
I'm happy to Google for things, but I don't think it's good to have
a misleading search box on the front page.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Cryptographic hash uniquness (was [Haskell-cafe] Simple network client)

2008-02-01 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Peter,

Thursday, January 31, 2008, 8:01:36 PM, you wrote:


files with different content generating the same hash)... My
intuition told me that the odds of two cryptographic hashes (on
meaningful content) colliding was much less than the earth being
destroyed by an asteroid... But this is just intuition... What does
computer science tell us about this?


you may be interested to know that widely used rsync algorithms relies
on 128-bit hashes and its author speculated about its reliability:

http://samba.org/~tridge/phd_thesis.pdf



Interesting paper. Thank you. I had a quick read of the bit relating to
hashes and my understanding is this. He uses a weak, quick and simple
hash to deal with 99.99% (I invented that percentage) of cases - if the
hash is different we know the files are definitely different - if the
hash collides he then does a strong, slow, secure hash and relies on
this as the answer. But he's relying on the strong hash rather than
doing a byte by byte comparison because there is a major cost (a network
transmission of the file) to doing the proper byte by byte comparison.
If you had both files accessible at a low cost it might be better to
byte by byte compare them when you get a collision rather than use
the strong hash.

The right approach may be to assume that collisions will occur and
cater for this properly in the program somehow. A good tip for testing
this sort of thing is to have the length of the hash (maximum size of
the array or whatever you want to test) as a parameter that you can
turn down to a very low value to induce collisions (overflows etc) to
see whether the program still works. And then turn it back up for live
use.
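
A minimal sketch of that "cheap hash first, byte-by-byte comparison on a
match" idea, with the hash width as a parameter you can turn right down
in tests to force collisions (the toy hash is illustrative, not rsync's):

import qualified Data.ByteString.Lazy as L
import Data.Bits ((.&.))

-- A deliberately weak rolling hash, truncated to the requested number of bits.
cheapHash :: Int -> L.ByteString -> Int
cheapHash bits = (.&. (2 ^ bits - 1)) . L.foldl' step 5381
  where step acc w = acc * 33 + fromIntegral w

-- Different hashes prove the inputs differ; equal hashes fall back to
-- comparing the bytes themselves.
sameFile :: Int -> L.ByteString -> L.ByteString -> Bool
sameFile bits a b
    | cheapHash bits a /= cheapHash bits b = False
    | otherwise                            = a == b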

A cryptographic hash appears as a completely random function of the
input, so the likelihood of any particular pair of inputs colliding is
simply 1 in 2^(number of bits in the hash).


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] shootout using 6.6?

2008-01-19 Thread Richard Kelsall

Don Stewart wrote:
...

Yay, constructor specialisation!


http://shootout.alioth.debian.org/gp4/benchmark.php?test=binarytrees&lang=all



And it's nice to see GHC 6.8.2 is now nearly ten times faster than
GCC for this benchmark :

http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all

Scary to think how fast GHC might become in 6.10 :)


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Not to load Prelude

2008-01-11 Thread Richard Kelsall

Maurício wrote:

  Is it possible not to load Prelude (...)

 (...) NoImplicitPrelude is more aggressive
  than 'import Prelude()'. You will have to
  look elsewhere for a better explanation of
  the difference.

I tried google and ghc homepage, but could
not find “elsewhere” :) Can you give me a
link or somewhere to start from?


I often find it easier, rather than doing a full Google search,
to start with the 'Search haskell.org' link on the front page here

http://haskell.org/

This is a Google site search of *.haskell.org/*

But ignore the type-in search box and button at the top of the front
page - they don't work properly.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Please allow beginners to vocalize code. = :: - - -

2008-01-08 Thread Richard Kelsall

A tip for people explaining Haskell to beginners:

The acts of reading and speaking are closely integrated in the brain.
When I am unable to 'say' something it is much more difficult to 'read'
and absorb the text. There appears to be a misconception that it somehow
helps beginners to understand things if they are not told how to 'say'
these strange new symbols. Certainly for me, and I would guess for most
people, this idea is completely wrong. When I 'read' a new operator
such as >>= I want to know how to 'say' it.

I don't mean that posts on Haskell-Cafe should do this, but books
and articles aimed at people who haven't used Haskell before should
always vocalize the symbols as soon as they are introduced.

On a related note, if there isn't already, it would be nice to have a
page in the wiki that gives good ways of vocalizing the operators and
vocalizing them in simple code snippets. I might get round to doing
this sometime, maybe just a table something like this :

Operator   Formal      Informal
--------   ------      --------
::         has type
->         maps to     to

example
f :: Int -> Int        f has type Int to Int


Or, to descend into trivia, a subtle distinction might be usefully
made between  =  and  ==

f i = 54 * i     f i is 54 times i
x == 27          x equals 27


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Please allow beginners to vocalize code. = :: - - -

2008-01-08 Thread Richard Kelsall

Henning Thielemann wrote:

On Tue, 8 Jan 2008, Richard Kelsall wrote:

...

'>>=' is spoken "bind"


...

On a related note, if there isn't already, it would be nice to have a
page in the wiki that gives good ways of vocalizing the operators and
vocalizing them in simple code snippets. I might get round to doing
this sometime, maybe just a table something like this :


Got idea, please go ahead! I suggest categories Style or Syntax.



Operator   Formal      Informal
--------   ------      --------
::         has type
->         maps to     to

example
f :: Int -> Int        f has type Int to Int


Is a symbol-by-symbol translation sensible? What about "f maps from Int to
Int", "plus maps Int to a function, which maps Int to Int"?



Thank you. I've put a primitive page up here

http://haskell.org/haskellwiki/Pronunciation

under category syntax. Obviously it needs more work, but maybe it
will act as a bit of grit in the oyster.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Please allow beginners to vocalize code. = :: - - -

2008-01-08 Thread Richard Kelsall

Derek Elkins wrote:

I don't why you think, [t]here [is] a misconception that it somehow
helps beginners to understand things if they are not told how to 'say'
these strange new symbols.  If you had said, it doesn't help beginners
to be told how to 'say' these symbols, I would have less of an issue.


Sorry, I got a bit convoluted there. I suppose what I'm trying to
say is that I struggle when I am not given direct words to attach to
symbols. A word for the symbol allows me to read the program. Imagine
two experienced Haskell programmers on the phone, one reading a Haskell
program snippet to the other.

For example Hutton first uses the => symbol on p.7

In Haskell, every function has a type that specifies the nature of
its arguments and results, which is automatically inferred from the
definition of the function. For example, the function sum has the
following type:
Num a => [a] -> a
This type states that for any type a of numbers, sum is a function that
maps a list of such numbers to a single such number.

This is a good description, but doesn't seem a likely way for an
experienced programmer to read that statement over the phone to
somebody else. I am guessing things like

For all Num a, list of a to a.
Num a implies, list of a to a.

In appendix B at the back of the book I find the 'meaning' of => is
'class constraint', but again this doesn't seem a likely way of 'saying'
it.

Now supposing you were on the phone to a Haskell programmer and you
wanted to say this

f :: Int -> Int

I imagine you might say "f maps Int to Int" or "function f has type
Int to Int". Both symbols have been translated directly to words. Nobody
would say "f, colon colon, Int, dash greater than, Int".


I don't think anyone thinks that it is helpful not to provide a reading,
and in my experience, a reading is usually provided, directly or
indirectly, when such things are introduced in (larger)
introductions/tutorials/textbooks.  However, as you've already found,
some things don't seem to have meaningful readings.  E.g. you list -< in
the title, but the motivation of that notation has nothing to do with it
having a clear reading, but rather comes from a graphical perspective,
x <- f -< y
I.e. that looks like an arrow pointing left with -< the tail.  That

said, two readings are indirectly provided in
http://www.haskell.org/arrows/sugar.html



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Importing Data.Char speeds up ghc around 70%

2008-01-03 Thread Richard Kelsall

Simon Marlow wrote:
...
I have seen strange artifacts like this before that turned out to be 
caused by one of two things:


 - bad cache interactions, e.g. we just happen to have laid out the code in
   such a way that frequently accessed code sequences push each other out
   of the cache, or the relative position of the heap and stack have a bad
   interaction.  This happens less frequently these days with 4-way and
   8-way associative caches on most CPUs.

 - alignment issues, such as storing or loading a lot of misaligned Doubles

in the second case, I've seen the same program run +/- 50% in 
performance from run to run, just based on random alignment of the 
stack.  But it's not likely to be the issue here, I'm guessing.  If it 
is code misalignment, that's something we can easily fix (but I don't 
*think* we're doing anything wrong in that respect).


I have an Opteron box here that regularly gives +/- 20% from run to run 
of the same program with no other load on the machine.  I have no idea 
why...



...
This got me wondering how I could test for code misalignment problems.
I expect there's a cleverer way, but how about a single executable
containing several copies of the same code to be tested and a loop
that runs and times the different copies. A consistently higher or
lower runtime from one copy would indicate a misalignment problem.
(I'm assuming the different copies of the code would probably load
at fairly random alignments, random padding could be added.) It might
have to run the copies in a different order each time round the loop
to avoid the possibility of external periodic events affecting a
particular copy.
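
A rough sketch of that experiment, using the same hand-rolled CPUTime
timing as the programs earlier in this archive (the three copies and the
iteration count are arbitrary):

import System.CPUTime (getCPUTime)
import Control.Monad (forM_)

-- Three identical copies of the work, so the compiler lays each one out
-- at a different address.
copyA, copyB, copyC :: Int -> Int
copyA n = sum [1 .. n]
copyB n = sum [1 .. n]
copyC n = sum [1 .. n]

timeCopy :: (Int -> Int) -> Int -> IO Integer
timeCopy f n = do
    t0 <- getCPUTime
    f n `seq` return ()
    t1 <- getCPUTime
    return (t1 - t0)

main :: IO ()
main = forM_ [0 .. 5 :: Int] $ \r -> do
    -- Rotate the order each round so a periodic external event can't
    -- always hit the same copy.
    let copies = [("A", copyA), ("B", copyB), ("C", copyC)]
        order  = take 3 (drop (r `mod` 3) (cycle copies))
    forM_ order $ \(name, f) -> do
        t <- timeCopy f 5000000
        putStrLn (name ++ ": " ++ show t ++ " picoseconds")

If one copy is consistently slower than the others across rounds, that
points at layout or alignment rather than external noise.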


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compiler backend question

2008-01-01 Thread Richard Kelsall

Peter Verswyvelen wrote:
...
I was wondering, why doesn't GHC use the GCC (or any other standard 
compiler) backend intermediate code? The backend of GCC generates highly 
optimized code no? Or is the intermediate code format of GCC (or other 
compilers) not suitable for Haskell?

...

My guess is that there is scope for producing assembly code that
out-performs GCC. I speak as someone who has no actual knowledge
of how GCC works :) I have a theory that the back-end of most compilers,
including GCC, is an accretion of many hand-coded, hard-coded functions
to perform very specific optimisations for particular instructions for
particular processors. These have been developed over the years by
relatively ad-hoc timings of the precise assembly code the individual
who wrote the function wanted to improve.

So what's wrong with this approach?

When a new processor is introduced it needs a different set of
optimisations, which take time to develop, which means the compiler
is sub-optimal for a while for new processors.

Also, the number of processors we would like the compiler optimised
for is large if we're really trying to get the last few percent from
every processor. Modern processors aren't simple things like the old
6502 or Z80 which had definite, easily predictable, timings. Think
different cache sizes, multiple pipelines, branch prediction etc.

"The microarchitecture of Intel and AMD CPU's: An optimization guide
for assembly programmers and compiler makers" is here
http://www.agner.org/optimize/

It seems unlikely therefore that the code generation for every modern
processor variant will be hand-optimised to account for all the small
details. Even in GCC, which no doubt has a huge amount of continuous
effort applied to it by many clever people, they are never going to
hand-code every complexity of every processor.

Take a small example : integer multiplication. Looking at the table
in the conclusion of the microarchitecture PDF we see that different
x86 processors have different relative speed of adding and multiplying.
Something like this

Execution latencies, typical   AMD64   P4E   PM   Core2
integer addition                   1     1    1       1
integer multiplication             3    10    4       3

We can compute
4 * 371
faster on P4E by doing
371 + 371 + 371 + 371
but on Core2 and AMD64 we should do
4 * 371
and on PM it might depend on other factors, I don't know. Now supposing
we allow powers of two by shifting as well as integer addition, can we
immediately say the best way of multiplying two integers on all these
processors? Maybe roughly, but do we always get the absolute fastest
code? For example it will depend on the values of the integers.
If we know one of them in advance as I've assumed above, or if we
know they frequently have particular values we might handle them
first etc. This is just one small optimisation out of many. And the
optimisations might interact.
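
As a toy illustration of the "repeated addition is equivalent to
multiplication" point, here is multiplication by a known constant
decomposed into shifts and adds in Haskell (source level only, so it
merely mimics what a code generator would emit):

import Data.Bits (shiftL, testBit)

-- Multiply by a known non-negative constant using only shifts and additions.
mulByConst :: Int -> Int -> Int
mulByConst k x = sum [ x `shiftL` i | i <- [0 .. 62], testBit k i ]

-- ghci> mulByConst 4 371 == 4 * 371
-- True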

So the inevitable conclusion is that the precise assembly code
optimisations must not be hand-coded directly into the back-end of
the compiler, as I expect they are currently in most compilers, but
that they need to be expressed at a higher level repeated addition
is equivalent to multiplication and adapted down to particular
processors and situations based on extensive, genetic algorithm
guided, timing runs.

Anyway, that's my wild idea thrown in the pot. Might be a nice research
paper in there for someone :)


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A tale of three shootout entries

2007-11-30 Thread Richard Kelsall

Sterling Clover wrote:
I'm still curious if the pre-calculation of partial sums that I did 
works well across processors, as I don't see why it shouldn't. My 
less-strictified version of Don's code is attached, and below are the 
functions you'll need to insert/replace to make the partial-sums 
optimization work.
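
For reference, a small sketch of what the partial-sums idea amounts to,
written against the simple look function from earlier in this thread
(the cumulative totals are computed once, so each lookup no longer
re-subtracts frequencies):

-- Turn (symbol, frequency) pairs into (symbol, running total) pairs, once.
cumulative :: [(Char, Float)] -> [(Char, Float)]
cumulative = go 0
  where go _   []             = []
        go acc ((c, f) : cfs) = let acc' = acc + f in (c, acc') : go acc' cfs

-- Lookup against the precomputed totals; compare with the original look,
-- which subtracts f from r at every step.
lookCum :: [(Char, Float)] -> Float -> Char
lookCum [(c, _)] _ = c
lookCum ((c, f) : cfs) r
    | r < f     = c
    | otherwise = lookCum cfs r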


Hello Sterling, I've timed your new Fasta with optimised bangs - it's
the fastest so far. But the pre-calculated partial-sums version seems
to go a bit slower for some unknown reason.

                            Seconds
Optimised bangs program      11.20   compiled ghc --make
Optimised bangs program      10.73   compiled with -O -fglasgow-exts
                                     -optc-mfpmath=sse -optc-msse2
                                     -optc-march=pentium4
Partial-sums program         11.97   compiled ghc --make
Partial-sums program         11.14   compiled with -O -fglasgow-exts
                                     -optc-mfpmath=sse -optc-msse2
                                     -optc-march=pentium4

This is on my GHC 6.6.1, W2K, Intel Core 2 Duo 2.33GHz machine - same
as for the previous timings I gave in this thread.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A tale of three shootout entries

2007-11-27 Thread Richard Kelsall

Sterling Clover wrote:
...

Finally, there's fasta.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=ghc&id=2

This one really depresses me. It outperforms the previous version by 
roughly 20% on my machine (PPC) but underperforms by roughly the same 
amount on the shootout box.  

...

Well done. Great, I'll have a play with your new version of Fasta, I've
just upgraded to an Intel Core 2 Duo 2.33GHz.

Something I found with Dons version on my machine was that if I removed
all the exclamation marks and the -fbang-patterns bit at the top it went
about 20% faster as well as being much cleaner code, but with my very
rudimentary understanding of Haskell I wasn't entirely sure it would
produce the same results if I did this and didn't get round to checking.

I suspect the majority of Fasta time is spent in the rand routine. My
wild cunning plan for this was that it might be possible to avoid doing
the conversion to a float every time the routine is called. My thinking
is that it could just return an Int most of the time because the number
is only used in, I think, a less-than comparison outside rand which
could almost always be decided by an Int less-than comparison rather
than a float less-than comparison. A clever lazy less-than mechanism
could get rand to give a float when the Int comparison is too close to
be certain. Probably be classified by the shootout as cheating though.
And it's way beyond what I could currently write in Haskell.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A tale of three shootout entries

2007-11-27 Thread Richard Kelsall

Simon Peyton-Jones wrote:

| Something I found with Dons version on my machine was that if I removed
| all the exclamation marks and the -fbang-patterns bit at the top it went
| about 20% faster as well as being much cleaner code, but with my very
| rudimentary understanding of Haskell I wasn't entirely sure it would
| produce the same results if I did this and didn't get round to checking.

If, after investigation (and perhaps checking with Don) you find that adding 
bangs makes your program go slower, even though the function is in fact strict 
(otherwise it might go slower because it's just doing more work!) then I'd love 
to see a test case.


Sorry, I don't understand the code (I've jumped in at the deep end
before learning to swim), but I can now tell you it's producing the same
results when I remove some of the exclamation marks. I've checked with
an MD5 on the output.

The timings in seconds are for 10,000,000 iterations, averaged over 5
runs. (There was quite a bit of variation.) Compiled with GHC 6.6.1.
(I got stuck compiling it under 6.8.) The fancy compile options are from
the shootout page.

Don's original program       13.26    compiled ghc --make
Don's original program       12.54    compiled with -O -fglasgow-exts
                                      -optc-mfpmath=sse -optc-msse2
                                      -optc-march=pentium4
Removed 3 bangs from rand    11.47    compiled ghc --make
Removed 3 bangs from rand    11.57    compiled with -O -fglasgow-exts
                                      -optc-mfpmath=sse -optc-msse2
                                      -optc-march=pentium4

The code below is Don's program from

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=ghc&id=0

with a timing function added by me. The rand function is where I removed
three exclamation marks to make the program faster. Previously I removed
different combinations of bangs. Some bangs seem to make it faster and
some seem to make it slower.

Richard.



--
{-# OPTIONS -O2 -optc-O2 -optc-ffast-math -fbang-patterns 
-fexcess-precision #-}

--
-- The Computer Language Benchmarks Game
-- http://shootout.alioth.debian.org/
--
-- Contributed by Don Stewart
-- A lazy bytestring solution.
--
-- Add:
-- -optc-mfpmath=sse -optc-msse2
--

import System
import Data.Word
import Control.Arrow

import Text.Printf -- RK added.
import System.CPUTime  -- RK added.

import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Lazy.Char8 as C (pack,unfoldr)
import qualified Data.ByteString as S
import Data.ByteString.Base


-- RK added this time function.
time :: IO t -> IO t
time a = do
    start <- getCPUTime
    v <- a
    end   <- getCPUTime
    let diff = (fromIntegral (end - start)) / (10^12)
    printf "Calc time %0.3f \n" (diff :: Double)
    return v


main = do -- RK modified main to time the computation.
    time $ comp   -- RK mod.

comp :: IO Int    -- RK mod.
comp = do -- RK mod. This was Dons main. I just renamed to comp.
    n <- getArgs >>= readIO . head
    writeFasta  "ONE"   "Homo sapiens alu"       (n*2) (L.cycle alu)
    g <- unfold "TWO"   "IUB ambiguity codes"    (n*3) (look iubs) 42
    unfold      "THREE" "Homo sapiens frequency" (n*5) (look homs) g


--
-- lazily unfold the randomised dna sequences
--

unfold l t n f !g = putStrLn (">" ++ l ++ " " ++ t) >> unroll f g n

unroll :: (Int -> (Word8, Int)) -> Int -> Int -> IO Int
unroll f = loop
    where
        loop r 0   = return r
        loop !r !i = case S.unfoldrN m (Just . f) r of
                        (!s, Just r') -> do
                            S.putStrLn s
                            loop r' (i-m)
          where m = min i 60

look ds !k = let (d,j) = rand k in (choose ds d, j)

choose :: [(Word8,Float)] -> Float -> Word8
choose [(b,_)]       _ = b
choose ((!b,!f):xs) !p = if p < f then b else choose xs (p-f)


--
-- only demand as much of the infinite sequence as we require

writeFasta label title n s = do
     putStrLn $ ">" ++ label ++ " " ++ title
     let (t:ts) = L.toChunks s
     go ts t n
  where
     go ss !s !n
        | l60 && n60 = S.putStrLn l               >> go ss        r (n-60)
        |        n60 = S.putStr s >> S.putStrLn a >> go (tail ss) b (n-60)
        | n <= ln    = S.putStrLn (S.take n s)
        | otherwise  = S.putStr s >> S.putStrLn (S.take (n-ln) (head ss))
        where
            !ln   = S.length s
            !l60  = ln >= 60
            !n60  = n  >= 60
            (l,r) = S.splitAt 60 s
            (a,b) = S.splitAt (60-ln) (head ss)



im  = 139968
ia  = 3877
ic  = 29573

rand :: Int -> (Float, Int)
rand seed = (newran,newseed) -- RK modified. Was !seed
    where
        newseed = (seed * ia + ic) `rem` im  -- RK mod. Was

Re: [Haskell-cafe] A tale of three shootout entries

2007-11-27 Thread Richard Kelsall

Don Stewart wrote:
...


There may well have been changes to the strictness analyser that make
some of the bangs (or most) unnecessary now. Also, its very likely 
I didn't check all combinations of strict and lazy arguments for the

optimal evaluation strategy :)


I suspect the optimum details will change again when we get to GHC 6.8.
Yes, I got bored trying different combinations too. A genetic algorithm
that knocks out different combinations might be fun. The ones in rand
seem to make the most difference and I decided the code was easier to
read if I took them all out.
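
For what it's worth, the kind of change being discussed is tiny. As a
hypothetical illustration (not the exact shootout code), the two
versions below should give the same answers, and with GHC's strictness
analyser the bangs are often redundant:

{-# LANGUAGE BangPatterns #-}

im, ia, ic :: Int
im = 139968
ia = 3877
ic = 29573

-- With explicit bangs on the seed and the intermediate value.
randBanged :: Int -> (Float, Int)
randBanged !seed = (fromIntegral s / fromIntegral im, s)
    where !s = (seed * ia + ic) `rem` im

-- The same function with the bangs removed.
randPlain :: Int -> (Float, Int)
randPlain seed = (fromIntegral s / fromIntegral im, s)
    where s = (seed * ia + ic) `rem` im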


If it seems to be running consitently faster (and producing better Core
code), by all means submit. I don't think this is a ghc bug or anything
like that though: just overuse of bangs, leading to unnecessary work.

-- Don



It was consistently faster on my machine, but it would be interesting
to compare with the run-times on Sterling's PPC machine. I'll have a
play with Sterling's new program and report.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Searched for mdo on haskell.org. Found nothing.

2007-11-25 Thread Richard Kelsall

Tillmann Rendel wrote:

Andrew Coppin wrote:
In general, I find *most* search functions to be fairly unhelpful. 
Google is the shining exception to this rule; it almost always seems 
to figure out what you're after.


I guess doing text searching is just a fundamentally difficult 
problem, and the guys at Google have spent a hell of a long time on it.


text searching is a well-known problem. ranking search results by 
relevance is the key to google's success. read the paper about google to 
learn more:


Sergey Brin and Lawrence Page,
The Anatomy of a Large-Scale Hypertextual Web Search Engine
in: Proceedings of the 7th International WWW Conference, 1998, Brisbane

http://infolab.stanford.edu/~backrub/google.html
http://infolab.stanford.edu/pub/papers/google.pdf


Thank you for that link. An interesting paper. I hadn't seen it before.

I've added a link to a Google site search to the haskell.org front page.
I can't find a way to link the existing search box at the top directly
to it and can't create a 'form' element, so I've created a link labelled
'Search haskell.org' to this intermediate page on my site

http://www.millstream.com/haskellorgsearch.html

If someone can point the haskell.org search box directly at the
Google site search, please do so. It currently searches *.haskell.org/*,
which may be too broad; I can adjust as required.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Searched for mdo on haskell.org. Found nothing.

2007-11-22 Thread Richard Kelsall

I was reading the 'Problems with do notation' thread and Thomas
Schilling suggested reading about mdo. Not knowing mdo I thought
that sounds interesting and went to

http://haskell.org/

which redirects you to

http://haskell.org/haskellwiki/Haskell

and gives you a search box. Typing "mdo" and clicking the Search
button gives

Showing below 0 results starting with #1.
No page title matches
No page text matches
Note: unsuccessful searches are often caused by searching for common
words like "have" and "from", which are not indexed, or by specifying
more than one search term (only pages containing all of the search
terms will appear in the result).

Maybe mdo is too common to be indexed?

So I went to Google and searched for "Haskell mdo", and top of the
list is this page:

http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html

which is "Chapter 8. GHC Language Features, 8.3. Syntactic extensions",
which describes mdo.

Did I do something wrong when searching haskell.org?


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Searched for mdo on haskell.org. Found nothing.

2007-11-22 Thread Richard Kelsall

Thomas Schilling wrote:

On Thu, 2007-11-22 at 13:23 +, Richard Kelsall wrote:

I was reading the 'Problems with do notation' thread and Thomas
Schilling suggested reading about mdo. Not knowing mdo I thought
that sounds interesting and went to


Gah, I was too lazy to add the proper references:

 A Recursive do for Haskell by Erkök and Launchbury
  http://www.cse.ogi.edu/PacSoft/projects/rmb/recdo.pdf

 A Recursive do for Haskell: Design and Implementation 
 by Erkök and Launchbury

  http://www.cse.ogi.edu/PacSoft/projects/rmb/mdo.pdf

And here's a nice use case for it:

  Assembly: Circular Programming with Recursive do by O'Connor
  http://www.haskell.org/sitewiki/images/1/14/TMR-Issue6.pdf


Thank you. I'll have a read of those. I didn't mean to suggest you
should have given all the details, just that haskell.org confused
me by saying mdo didn't exist.

I've just tried some other words from that page in the Search box and
couldn't see any pattern to whether the page appears in the search
results. Very strange. Maybe a message along the lines of "We are
hoping some generous person will improve this search feature at some
point. The source code is here ... The search currently produces
incomplete results. Please try Google if you do not find an answer."
would be useful?


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Searched for mdo on haskell.org. Found nothing.

2007-11-22 Thread Richard Kelsall

Mads Lindstrøm wrote:
...

Did I do something wrong when searching haskell.org?

Properly not. I think the problem is that haskell.org do not index
words that have length <= 3. MediaWiki (which I think haskell.org uses)
do not by default index short words (length <= 3 or length <= 4 - can't
remember which).

If you search for yhc you also get zero results, which does not make
sense either.

...

Yes, it does look like MediaWiki and so I guess from the number of
configuration options (if this is the right software)

http://www.mediawiki.org/wiki/Manual:Configuration_settings

there ought to be some clever setting somewhere rather than it needing
any programming. Maybe the full search has been switched off to save
resources? Experimenting with the search it seems to miss some longer
words too, but I can't see a pattern to it.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Somewhat random history question

2007-11-11 Thread Richard Kelsall

Andrew Coppin wrote:
...if GHC is written in Haskell, how the heck did they compile GHC in 
the first place?


The paper "A History of Haskell: Being Lazy With Class" by Paul Hudak,
John Hughes, Simon Peyton Jones and Philip Wadler is a good read.

http://research.microsoft.com/~simonpj/papers/history-of-haskell/index.htm

A snippet:

GHC was begun in January 1989 at the University of Glasgow, as soon
as the initial language design was fixed. The first version of GHC was
written in LML by Kevin Hammond, and was essentially a new front end to
the Chalmers LML compiler. This prototype started to work in June 1989,
just as Peyton Jones arrived in Glasgow to join the burgeoning
functional programming group there. The prototype compiler implemented
essentially all of Haskell 1.0 including views (later removed), type
classes, the deriving mechanism, the full module system, and binary I/O
as well as both streams and continuations. It was reasonably robust
(with occasional spectacular failures), but the larger Haskell prelude
stressed the LML prelude mechanism quite badly, and the added complexity
of type classes meant the compiler was quite a lot bigger and slower
than the base LML compiler.
...
GHC proper was begun in the autumn of 1989, by a team consisting
initially of Cordelia Hall, Will Partain, and Peyton Jones. It was
designed from the ground up as a complete implementation of Haskell in
Haskell, bootstrapped via the prototype compiler. The only part that was
shared with the prototype was the parser, which at that stage was still
written in Yacc and C. The first beta release was on 1 April 1991 (the
date was no accident), but it was another 18 months before the first
full release (version 0.10) was made in December 1992.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] viewing HS files in Firefox

2007-10-30 Thread Richard Kelsall

Jules Bean wrote:

Isaac Dupree wrote:
When I try to go to one of the Module.hs files, e.g. on 
darcs.haskell.org, it now has type HS and Firefox refuses to display 
it (and only lets me download it).  Does anyone know how to make 
Firefox treat certain file types as others (HS as plain text, in 
particular)? so that I can browse them with any convenience




It is really annoying, and it is an astoundingly old bug in firefox. 
Apparently it's very hard to fix due to annoying details of the firefox 
architecture.


It would be simplest for everyone if haskell.org was prepared to send 
out the files as text/plain (even though this is the wrong mime type), 
as I believe it used to do.

...

Yes, it does appear to be a bug in Firefox

https://bugzilla.mozilla.org/show_bug.cgi?id=57342

not to attempt to display text/x-haskell as if it were text/plain,
but, to get really obsessive, I'm not convinced text/plain is strictly
speaking the 'wrong' media-type if that's what the user-agent requests.
For example, my FireFox 1.5.0.5 says to the server it will Accept these
media-types

text/xml, application/xml, application/xhtml+xml, text/html; q=0.9,
text/plain; q=0.8,
image/png,*/*; q=0.5

This is in order of what it would most like to get back from the server.
The server then goes off and tries to find the best media-type for my
browser - it can supply different ones depending on what the browser
says it wants. By returning it as text/x-haskell the server has given
the resource to my browser in */*, which is the least wanted media-type.
This is perfectly correct behaviour, but if the server was also capable
of providing the same thing as text/plain it would be better to give
this, or even better a pretty coloured text/html one if the server had
one available. I think the underlying file returned as text/x-haskell
or text/plain can be the exact same file assuming all x-haskell are
also plain.

Could be wrong, but that's my understanding of content negotiation.
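
As a toy sketch of that negotiation logic (made-up names, nothing to do
with how Apache actually implements it), picking the representation the
browser prefers just means comparing q-values, with types the client did
not name falling back to */*:

import Data.List (maximumBy)
import Data.Maybe (fromMaybe)
import Data.Ord (comparing)

type MediaType = String

-- The Accept list quoted above, as (media-type, q) pairs.
accept :: [(MediaType, Float)]
accept =
    [ ("text/xml", 1.0), ("application/xml", 1.0)
    , ("application/xhtml+xml", 1.0), ("text/html", 0.9)
    , ("text/plain", 0.8), ("image/png", 1.0), ("*/*", 0.5) ]

-- A type the client did not list explicitly only matches */*.
quality :: MediaType -> Float
quality t = fromMaybe wildcard (lookup t accept)
    where wildcard = fromMaybe 0 (lookup "*/*" accept)

-- Of the representations the server can offer, send the one the client
-- rates highest.
bestOf :: [MediaType] -> MediaType
bestOf = maximumBy (comparing quality)

With both representations on offer, bestOf ["text/x-haskell",
"text/plain"] picks text/plain at q=0.8, whereas text/x-haskell on its
own only matches */* at q=0.5.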


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] viewing HS files in Firefox

2007-10-28 Thread Richard Kelsall

Thomas Schilling wrote:

On Sat, 2007-10-27 at 18:48 -0400, Isaac Dupree wrote:
When I try to go to one of the Module.hs files, e.g. on 
darcs.haskell.org, it now has type HS and Firefox refuses to display it 
(and only lets me download it).  Does anyone know how to make Firefox 
treat certain file types as others (HS as plain text, in particular)? 
so that I can browse them with any convenience


I believe those kinds of problem have to do with the MIME-encoding on
the server side:  The server uses text/x-haskell.  For Firefox to
display the document inline it probably has to be text/plain.  Not sure
what the proper fix is, though.


It should probably be fixed in one of the Apache config files. In the
HTTP headers from darcs.haskell.org (viewed with the Live HTTP headers
extension in FireFox) I'm getting

Server: Apache/2.2.3 (Debian)
Content-Type: text/x-haskell

returned when I request the Map.hs file. For Apache 2.2 there seem to
be various solutions. The web admin should probably be looking at these
sorts of things

http://httpd.apache.org/docs/2.2/mod/mod_mime.html#addtype
http://httpd.apache.org/docs/2.2/mod/core.html#forcetype
http://httpd.apache.org/docs/2.2/mod/core.html#defaulttype

and maybe rummaging in the httpd.conf file.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] viewing HS files in Firefox

2007-10-28 Thread Richard Kelsall

Shachaf Ben-Kiki wrote:

Richard Kelsall wrote:

Thomas Schilling wrote:

On Sat, 2007-10-27 at 18:48 -0400, Isaac Dupree wrote:

When I try to go to one of the Module.hs files, e.g. on
darcs.haskell.org, it now has type HS and Firefox refuses to display 

 it (and only lets me download it).  Does anyone know how to make
 Firefox treat certain file types as others (HS as plain text,
 in particular)? so that I can browse them with any convenience

I believe those kinds of problem have to do with the MIME-encoding on
the server side:  The server uses text/x-haskell.  For Firefox to
display the document inline it probably has to be text/plain.  Not sure
what the proper fix is, though.

It should probably be fixed in one of the Apache config files. In the
HTTP headers from darcs.haskell.org (viewed with the Live HTTP headers
extension in FireFox) I'm getting


Fixed? It's already fixed. text/x-haskell is the right Content-Type to
send; this is Haskell after all, not just plaintext. It's your
browser's responsibility to display it any way it likes. People have
had this discussion in #xmonad, and it seems that MIME edit
(https://addons.mozilla.org/en-US/firefox/addon/4498) is a workable
solution (I don't use it myself, though). Of course, Sterling Clover's
suggestion (Open in Browser) would also work well.


Hello Shachaf, I hope you intended this for the list, as I haven't
seen it come through; if not, my apologies for posting it.

Ah, yes, I see what you mean: text/x-haskell is the best media-type.
However, I think it ought to be possible to use 'content negotiation'

http://httpd.apache.org/docs/2.2/content-negotiation.html

with more than one representation to supply the same file as text/plain
to non-Haskell-capable browsers. I haven't actually done this though
so can't be certain of it or provide configuration details. If somebody
really wanted to get clever with it, it could also supply pretty
coloured Haskell for a text/html representation.

Sorry, I couldn't find the #xmonad discussion, is this published
somewhere?


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] New slogan for haskell.org

2007-10-05 Thread Richard Kelsall

Albert Y. C. Lai wrote:


... I'll ask the Greg Wilson question: where is your data? ...


I think we could make some sort of claim for the concise nature of
Haskell code based on "Evaluating High-Level Distributed Language
Constructs" by Phil Trinder

http://www.macs.hw.ac.uk/~trinder/papers/ICFP2007.pdf

which suggests a code size ratio of approximately GdH = 1, Erlang = 6,
C++ = 42. I'm assuming GdH is sufficiently close to vanilla Haskell
for this result to apply to Haskell too.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] -O2 compile option can give speed increase over -O. Fasta shootout program test runs.

2007-07-16 Thread Richard Kelsall

I have been playing with the Fasta program in the shootout to see if
I can make it, umm, faster. Starting from Don's program on this page and
adding some timing calculations as suggested on this wiki page

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=ghc&id=2
http://www.haskell.org/haskellwiki/Timing_computations

I added different OPTIONS into the top line of the program, did a
ghc --make fasta.hs, and ran it each time with fasta 250
(This is one tenth of the shootout figure.) These runs all keep the
existing OPTIONS of -fbang-patterns -fexcess-precision.

  Seconds   OPTIONS Added
  -------   -------------
   40.5
   40.5     -funbox-strict-fields
   40.4     {-# INLINE rand #-}
   17.2     -O
   17.0     -O  -fvia-C
   14.4     -O  -optc-march=pentium4
   11.5     -O2
   11.2     -O3
   11.5     -O3   {-# INLINE rand #-}
   11.3     -O2 -optc-march=pentium4

There was a bit of variation; I've averaged over two runs. This is on
an Intel Pentium D 2.66GHz running W2K and GHC 6.6.1.

It seems the -O2 option can give a significant speed increase relative
to just the -O option. This is contrary to the documentation, which says

http://www.haskell.org/ghc/docs/latest/html/users_guide/options-optimise.html
http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html

it won't make any difference. I guess it's program, architecture and
operating system specific, but according to these figures the -O2 option
seems well worth a try for programs that need speed. It may be that
we sacrifice bounds checking or something important with -O2; I don't
know.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe