[Haskell-cafe] stateful walk through a tree?

2007-02-19 Thread David Tolpin
Hi,

I am looking for help in design of a stateful tree walker. I have a tree-like 
data structure (an AST for a small language) for which there is a general 
stateful walk procedure. That is, each node type is an instance of a walker 
class, and the walk function is monadic:

class Walker a where
   walk :: a -> State Context b

The context is used to store names in the scope, for example.  Now, I'd like to 
use Walker as a general class for implementing several different 
transformations on the tree (like cross-referencing, code emission, tree 
visualisation). Those transformations will need expanded state. What is the 
proper design for that?

In Common Lisp, for example, I would have Context contain an opaque slot 
(visitor) and assign additional state information to it; I would also 
define a generic function that dispatches on the visitor (and probably on 
the walker, if I end up having more than one walker).

How would I do that in Haskell? I'd like to avoid using mutable variables for 
now (mostly for didactic purposes).
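
One possible shape for this (a rough sketch only -- the Context fields and the
extra-state convention below are invented for illustration, using
Control.Monad.State from mtl): parameterise the walk over a
transformation-specific state s, kept alongside the shared Context:

import Control.Monad.State

data Context = Context { namesInScope :: [String] }   -- hypothetical fields

class Walker a where
    walk :: a -> State (Context, s) ()

-- Each transformation then instantiates s as it likes, e.g. a
-- cross-referencer accumulating (name, use-site) pairs:
type XRef = [(String, Int)]

crossRef :: Walker a => a -> State (Context, XRef) ()
crossRef = walk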

David
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] speeding up fibonacci with memoizing

2007-02-19 Thread Stefan O'Rear
On Mon, Feb 19, 2007 at 08:47:39AM +0100, Mikael Johansson wrote:
 On Sun, 18 Feb 2007, Yitzchak Gale wrote:
 Besides memoizing, you might want to use the fact
 that:
 
 fib (2*k) == (fib (k+1))^2 - (fib (k-1))^2
 fib (2*k-1) == (fib k)^2 + (fib (k-1))^2
 
 
 Or, you know, go straight to the closed form for the fibonacci numbers! :)

That's fine in the blessed realm of arithmetic rewrite rules, but
here we need bitstrings, and computing large powers of irrational numbers
is not exactly fast.

Phi is definable in finite fields (modular exponentiation yay!) but
modular-ation seems ... problematic.

I have a gut feeling the p-adic rationals might help, but insufficient knowledge
to formulate code.

The GMP fibonacci implementation is of the quasilinear recurrence family, not
closed form.
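
For concreteness, a pure-Haskell sketch of the doubling identities quoted
above (an illustrative definition only, not GMP's code):

-- fib via fib(2k) = fib(k+1)^2 - fib(k-1)^2 and
--         fib(2k-1) = fib(k)^2 + fib(k-1)^2.
-- fibPair n returns (fib (n-1), fib n) so each step reuses both values.
fibPair :: Integer -> (Integer, Integer)
fibPair 1 = (0, 1)
fibPair n
  | even n    = let k      = n `div` 2
                    (a, b) = fibPair k        -- (fib (k-1), fib k)
                    c      = a + b            -- fib (k+1)
                in (b*b + a*a, c*c - a*a)     -- (fib (2k-1), fib (2k))
  | otherwise = let (a, b) = fibPair (n - 1)
                in (b, a + b)

fib :: Integer -> Integer
fib 0 = 0
fib n = snd (fibPair n)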

And lest we forget the obvious - by far the fastest way to implement fib in GHC 
Haskell:

{-# OPTIONS_GHC -O2 -cpp -fglasgow-exts #-}
module 
#ifdef fibimpl
Main(main)
#else
Fibs
#endif
where
import System.Environment
import Array
import List(unfoldr)

#ifdef __GLASGOW_HASKELL__
import System.IO.Unsafe
import Foreign.Marshal.Alloc
import Foreign.Storable
import GHC.Exts
import Foreign.C.Types
#endif

-- same as before

#ifdef __GLASGOW_HASKELL__
foreign import ccall "gmp.h __gmpz_fib_ui" _gfib :: Ptr Int -> CULong -> IO ()
foreign import ccall "gmp.h __gmpz_init" _ginit :: Ptr Int -> IO ()

gmpfib :: Int -> Integer
gmpfib n = unsafePerformIO $ allocaBytes 12 $ \p -> do
    _ginit p
    _gfib p (fromIntegral n)
    I# sz <- peekElemOff p 1
    I# pt <- peekElemOff p 2
    return (J# sz (unsafeCoerce# (pt -# 8#)))
#endif

-- same as before 

[EMAIL PROTECTED]:~/fibbench$ ./h gs 1
12.84user 0.24system 0:13.08elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+21082minor)pagefaults 0swaps
[EMAIL PROTECTED]:~/fibbench$ ./h gmp 1
9.12user 0.42system 0:09.58elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+35855minor)pagefaults 0swaps
[EMAIL PROTECTED]:~/fibbench$
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] FFI basics

2007-02-19 Thread Simon Peyton-Jones
| Thanks to everyone for all the help! Everything
| is working for me now. It turns out that the main
| detail I was missing was exactly what commands
| to type to compile it, and how to use it in GHCI.
| Pretty important detail, actually.
|
| Alistair - yes, there are a few simpler pages about
| FFI on the old wiki that I missed, thanks
| for the helpful links.
|
| I'll try to put it all up on the new wiki in a few days
| when I have time. What Bulat wrote is in my opinion
| _exactly_ what is needed. I'll incorporate the suggestion
| of Don and Sven, and then combine it with the two
| simpler pages from the old wiki.

Yitz, Please do make time to do this!  This is the moment, while it is still 
fresh in your mind.  You have learned something that wasn't obvious; it will 
save others going through the same loop if you write it down, and you are the 
ideal person to do so.

The obvious place to do so would be here
http://haskell.org/haskellwiki/GHC/Using_the_FFI

PS: be sure to include Sven's advice in this thread (below)

Thanks

Simon


|   foreign import ccall "mysin.h mysin"
|     c_mysin :: Double -> Double
| 
|  Shouldn't that be CDouble? At least for Int/CInt you can hit troubles on
|  64 bit machines...
|
| Yes, the code above is wrong in the sense that it makes assumptions which are
| not guaranteed at all in the FFI spec. The rules to follow are extremely
| simple, so there is no reason to ignore them:
|
|* If you want to use a C type foo in Haskell, use CFoo in the FFI
| (imported from Foreign.C.Types).
|
|* If you want to use a Haskell type Bar in C, use HsBar in C code
| (#included from HsFFI.h).
|
| It depends on the application/library in question which alternative is easier,
| but never use a mix.
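
For reference, the mysin example above rewritten to follow the first rule --
a minimal sketch (the Haskell-side wrapper name is ours):

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

foreign import ccall "mysin.h mysin"
    c_mysin :: CDouble -> CDouble

-- Convert at the boundary; realToFrac handles Double <-> CDouble.
mysin :: Double -> Double
mysin = realToFrac . c_mysin . realToFrac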
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] speeding up fibonacci with memoizing

2007-02-19 Thread Bertram Felgenhauer
Stefan O'Rear wrote:
 Prior art trumps all.  (by a few %)  granted it doesn't do much memoizing 
 anymore :)
 
 gs  ajb  f  d  u, it, z  s  n

[snip]

Nice. I took the opportunity to polish my generic linear recurrence
module somewhat and test its speed. It does quite well I think:

using
  http://int-e.home.tlink.de/haskell/LinRec.hs

and defining

 import qualified LinRec as L
 
 -- generic linear recurrence generator
 genfib = L.get [1,1] [0,1]

I get:

# ./t.sh gen 5000
11.65user 0.06system 0:11.71elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+17092minor)pagefaults 0swaps
# ./t.sh gs 5000
4.67user 0.06system 0:04.79elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+11746minor)pagefaults 0swaps

for a slowdown of about 2.5.
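
(For readers without LinRec at hand, here is a naive, quadratic-time sketch
of what such a generic linear-recurrence generator computes -- not the
module's actual API or algorithm, and the argument convention is made up:)

-- linRec coeffs inits: starts with 'inits'; each later term is the dot
-- product of 'coeffs' with the previous few terms.
linRec :: Num a => [a] -> [a] -> [a]
linRec coeffs = go
  where
    go window = head window
              : go (tail window ++ [sum (zipWith (*) coeffs window)])

fibs :: [Integer]
fibs = linRec [1,1] [0,1]   -- 0,1,1,2,3,5,...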

Bertram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: stateful walk through a tree?

2007-02-19 Thread apfelmus
David Tolpin wrote:
 I am looking for help in design of a stateful tree walker.

I think that you can use Data.Traversable to great effect. Related to
that are Control.Applicative and Data.Foldable. The papers that are
mentioned in the Haddocks explain what these modules do and what they
are useful for.
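
A minimal sketch of that idea (the tree type and the leaf-numbering walk are
invented here for illustration; Data.Traversable's mapM plays the role of the
generic stateful walk):

import Control.Applicative ((<$>), (<*>))
import Control.Monad.State (State, get, modify, runState)
import qualified Data.Foldable as F
import qualified Data.Traversable as T
import Data.Monoid (mappend)

data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

instance Functor Tree where
    fmap f (Leaf a)   = Leaf (f a)
    fmap f (Node l r) = Node (fmap f l) (fmap f r)

instance F.Foldable Tree where
    foldMap f (Leaf a)   = f a
    foldMap f (Node l r) = F.foldMap f l `mappend` F.foldMap f r

instance T.Traversable Tree where
    traverse f (Leaf a)   = Leaf <$> f a
    traverse f (Node l r) = Node <$> T.traverse f l <*> T.traverse f r

-- A stateful walk is then just T.mapM with State as the monad;
-- here, number the leaves from left to right.
numberLeaves :: Tree a -> Tree (a, Int)
numberLeaves t = fst (runState (T.mapM step t) 0)
  where
    step a = do { n <- get; modify (+1); return (a, n) }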

 How would I do that in Haskell? I'd like to avoid using mutable variables for 
 now
 (mostly for didactic puproses).

Well, Haskell doesn't have mutable variables as LISP or ML do. In the
end, avoiding mutable variables is more useful for non-didactic purposes :)

Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Donald Bruce Stewart

Following recent discussion about a cross-implementation performance
benchmark suite, based on nofib, I've gone and combined nofib with the
great language shootout programs, and rewritten the build system to
support cross implementation measurements.

The result is:

nobench
http://www.cse.unsw.edu.au/~dons/nobench.html

The benchmark suite runs regularly, and currently reports the
speed of each program in the suite, running under each system. The
results are quite interesting. The most recent run is available:

http://www.cse.unsw.edu.au/~dons/nobench/bench.results
http://www.cse.unsw.edu.au/~dons/nobench/bench.log

The programs are a mixture of traditional nofib style Haskell, with more
performance-tuned code from the shootout. More tweaking is required to
help better support nhc and yhc (and jhc, and ...).

The entire benchmark set and framework is available via darcs:

darcs get http://www.cse.unsw.edu.au/~dons/code/nobench

Currently todo are porting the rest of nofib, pretty graphs of the
results (and html),  and memory use measurements.

Patches welcome! 

Cheers,
   Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Neil Mitchell

Hi Dons,


nobench
http://www.cse.unsw.edu.au/~dons/nobench.html


Yhc is consistently half the speed of nhc, whereas in our tests it's
typically 20% faster. Can you make sure you've built Yhi with -O
(scons type=release should do it)? I opened a bug just a few days ago,
because I realised all benchmarks would get run at no optimisation
otherwise :)

If anyone wants a project, finding out what flags to build Yhi with to
get the best performance here would be nice to see :)

Why does the integrate benchmark import both System and
System.Environment? Yhc currently doesn't export getArgs from System,
only System.Environment. (And yes, we really should fix that!)

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Ketil Malde

Donald Bruce Stewart wrote:

Following recent discussion about a cross-implementation performance
benchmark suite, based on nofib, I've gone and combined nofib with the
great language shootout programs, and rewritten the build system to
support cross implementation measurements.
  

Great work!

..but I wonder if the shootout is really the kind of code that is ideal
for a compiler benchmark.  Typically (at least based on what I've seen
of the submissions) they tend to be fairly heavily tuned, using
optimizations that are a) obfuscating the code and b) tuned
specifically for GHC.

(Another potential issue that follows from this is how to resolve a
modification to a benchmark that makes one compiler faster at the
expense of another.)

Wouldn't it be better to benchmark a more idiomatically correct codebase?

-k
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Dougal Stanton
Quoth Ketil Malde, nevermore,
 
 Wouldn't it be better to benchmark a more idiomatically correct codebase?


I suppose the ideal way to do it would be benchmarks for both (1) the
idiomatic and (2) the highly tuned implementations. Then the compiler writers can
push 1 towards 2, while the pesky shootout implementers can move the
goalposts of 2. ;-)

In reality this may just foster a small set of horribly specialised
optimisers in the compilers, with little benefit for real-world usage. :-(

Cheers,

D.

-- 
Dougal Stanton
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Felipe Almeida Lessa

On 2/19/07, Donald Bruce Stewart [EMAIL PROTECTED] wrote:

results are quite interesting. The most recent run is available:

http://www.cse.unsw.edu.au/~dons/nobench/bench.results
http://www.cse.unsw.edu.au/~dons/nobench/bench.log


Maybe I'm missing something, but how can ghci beat ghc (on pidigits)?

BTW, nice compilation of tests =).

--
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Donald Bruce Stewart
felipe.lessa:
 On 2/19/07, Donald Bruce Stewart [EMAIL PROTECTED] wrote:
 results are quite interesting. The most recent run is available:
 
 http://www.cse.unsw.edu.au/~dons/nobench/bench.results
 http://www.cse.unsw.edu.au/~dons/nobench/bench.log
 
 Maybe I'm missing something, but how can ghci beat ghc (on pidigits)?
 
 BTW, nice compilation of tests =).

As far as I can see, this benchmark relies solely on how fast gmp is.
There's very little overhead other than that. More investigation
required though.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Donald Bruce Stewart
ithika:
 Quoth Ketil Malde, nevermore,
  
  Wouldn't it be better to benchmark a more idiomatically correct codebase?
 
 
 I suppose the ideal way to do it would be benchmarks for the (1) idiomatic
 and (2) the highly tuned implementations. Then the compiler writers can
 push 1 towards 2, while the pesky shootout implementers can move the
 goalposts of 2. ;-)
 
 In reality this may just foster a small set of horribly specialised
 optimisers in the compilers, with little benefit for real-world usage. :-(
 

I think more likely, and hopefully, we'll use this to check that things
aren't getting worse from release to release.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GHCi and multi-line support ?

2007-02-19 Thread Dunric
Is it possible to write multi-line definitions in the GHC interactive interpreter? 
(like in the Python interpreter?)

When I try to write e.g. an if-then-else across more than one line, I get the following 
error:
<interactive>:1:30: parse error (possibly incorrect indentation)

--

http://icqsms.cz/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Donald Bruce Stewart
Ketil.Malde:
 Donald Bruce Stewart wrote:
 Following recent discussion about a cross-implementation performance
 benchmark suite, based on nofib, I've gone and combined nofib with the
 great language shootout programs, and rewritten the build system to
 support cross implementation measurements.
   
 Great work!
 
 ..but I wonder if the shootout is really the kind of code that is ideal
 for a compiler benchmark.  Typically (at least based on what I've seen
 of the submissions) they tend to be fairly heavily tuned, using
 optimizations that are a) obfuscating the code and b) tuned
 specifically for GHC.

They exercise the pointy end of things. Specifically, mutable arrays,
double precision math and bytestrings. Stuff we don't have tests for in
nofib, that has performed poorly in the past (till we noticed it on the
shootout..). This kind of code does get written in practice (and when it
is written, it is usually because it needs to be fast).

So I think the few that were added are useful.

More category 'real' programs could be contributed, though.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Recursion in Haskell

2007-02-19 Thread kahl
  
  For absorbing the functional style of programming (which is what you really 
  should be working on at this point),

For functional style and the reasoning attitude associated with lazy
functional programming, the following book is a good introduction:

@Book{Bird-2000,
  author = {Richard Bird},
  title = {Introduction to Functional Programming using {Haskell}},
  year = 2000,
  publisher = {Prentice-Hall}
}

This is the second edition of:

@Book{Bird-Wadler-1988,
  year = 1988,
  title = {Introduction to Functional Programming},
  publisher = {Prentice-Hall},
  author = {Richard Bird and Phil Wadler}
}


Wolfram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHCi and multi-line support ?

2007-02-19 Thread Jules Bean

Dunric wrote:

Is it possible to write multi-line definitions in the GHC interactive interpreter? 
(like in the Python interpreter?)

When I try to write e.g. an if-then-else across more than one line, I get the following 
error:
<interactive>:1:30: parse error (possibly incorrect indentation)
  


It would be quite hard to do this with the layout rule, since the nature 
of the layout rule is that you don't know if the definition is complete 
until you read the next line. So you don't know whether or not to 
execute it yet...


With explicit notation ({} and friends), I don't see a technical reason 
not to. I just write long definitions though.
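
For what it's worth, keeping the definition on a single line sidesteps the
layout issue entirely, e.g. (an illustrative session):

Prelude> let fac n = if n <= 1 then 1 else n * fac (n - 1)
Prelude> fac 10
3628800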



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHCi and multi-line support ?

2007-02-19 Thread Neil Mitchell

Hi


 Is it possible to write multi-line definitions in the GHC interactive
 interpreter? (like in the Python interpreter?)


As it happens, it is possible to do with WinHugs, I've just never
turned it on by default. If there is massive need I'll polish this
feature up and put it in the next builds.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] questions about core

2007-02-19 Thread Simon Peyton-Jones
| I am trying to get a deeper understanding of core's role in GHC and
| the compilation of functional languages in general.  So far I have

You can find lots of stuff here
http://hackage.haskell.org/trac/ghc/wiki/Commentary.  At the bottom is a link 
to a lot of GHC-related papers.

|   - Exactly what are the operational and denotational semantics of core?

It'd be good to have a canonical place where this was written down, I agree; 
but it's totally straightforward.  (The operational semantics, at least.)  As 
Kirsten says, the FC paper is the most up to date presentation.

|   - The headline reasons (and any other arguments that emerge) for
| having core *and* stg as separate definitions.

You can find a description of STG, which describes the differences from Core, 
in the Commentary
http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/StgSynType

 There is a lot of stuff in the Commentary!

Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] GHC throws IOError on Win32 when there is no console

2007-02-19 Thread Simon Peyton-Jones
Would it be worth opening a Trac bug report or feature request for this?  
And/or documenting the unexpected behaviour; perhaps here 
http://haskell.org/haskellwiki/GHC/GUI_programming ?

Simon

| -Original Message-
| From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Duncan
| Coutts
| Sent: 10 February 2007 11:57
| To: John Ky
| Cc: Haskell Cafe
| Subject: Re: [Haskell-cafe] GHC throws IOError on Win32 when there is no 
console
|
| On Sat, 2007-02-10 at 09:32 +1100, John Ky wrote:
|  Hi,
| 
|  I noticed on Windows that when I use IO functions that write to stdout
|  when the process is lacking a console, those functions throw an
|  IOError.  I'm not sure if this also occurs for stderr because I
|  haven't tried it.
| 
|  Some classes of processes are created without a console because they
|  never interact with the user and include System services.  Crashing
|  with IOError in this case is difficult to diagnose because the
|  only symptom is the process crashes with no visible output.
| 
|  I believe the most sensible behaviour should be for those functions to
|  not throw, but instead do nothing.
|
| I brought this up some time ago with the GHC developers since it is a
| problem for Windows users who use Gtk2Hs and want to hide the console
| window. They felt that since this was the default system behaviour that
| it's best for GHC not to override that. C programs apparently suffer
| from the same problem (possibly depending on the variant of the C lib).
|
| The situation is much improved from when I originally complained about
| this. It does now at least display an message box with the error rather
| than silently terminating.
|
| If there is a reliable way to 'fix' the standard handles (as Paul Moore
| suggests) when running without a console then I could include that code
| into the Gtk2Hs startup and then make -optl-mwindows the default for
| progs compiled with Gtk2Hs.
|
| Duncan
|
| ___
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Why do I have to specify (Monad m) here again?

2007-02-19 Thread David House

On 18/02/07, Marc Weber [EMAIL PROTECTED] wrote:

Do I still miss a point?


I think Yitzchak's explanation of this was pretty good, so I recommend
you check that out. You should also make sure you read Sebastian's
argument, whose line of thought is similar to the one I'm going to try
to develop. Suppose we write the print function:

print x = putStrLn (show x)

Now, we want it to be able to act on as many types as possible, so we
write the type:

print :: a -> IO ()
print x = putStrLn (show x)

But that's not quite right; we apply the 'show' function to x, so x
must be of a type that instantiates Show. However, suppose our ever-hopeful
compiler writers decided that if we wrote something like:

print :: a -> IO ()

But the compiler inferred the type:

print :: Show a => a -> IO ()

Then it'd fill in the extra constraint and let everything work. This
is essentially your argument (although it applies to instances instead
of functions; I'll make the link at the end). I say this is a bad
idea, basically because the actual type of print is different to the
type we wrote down. Imagine that print is in a library and a user's
browsing the source of this library. They see the print function and
try to use it on something that doesn't satisfy the Show constraint.
The compiler bombs out with 'Cannot find Show instance' and the user
is confused. The type mentioned in the source file didn't mention any
Show instance, why should I need to provide one? Admittedly, in this
simple example, the user would probably see the show function being
used and realise what's going on, but this wouldn't happen in more
complex cases.

So the problem is, essentially, that the type written in the source
file is incorrect, but the compiler accepts it anyway. This is just
confusing.

Secondly, explicit writing of types in Haskell is completely optional
-- if you want, the compiler will infer everything for you, so why do
we bother at all? Well, imagine we wrote:

print :: Show a => a -> IO ()
print x = show x

If we hadn't written that type signature, then the compiler would
infer a type and accept the program. However, as we _did_ provide a
type signature, it acts a little like a spec or QuickCheck property
and reveals a typo straight away: we missed out the putStrLn.
Similarly, if we write the type:

print :: a -> IO ()

Then the attitude taken by the language designers dictates that that's
what you mean. If you then try to use show, you presumably made a
mistake writing the type. The compiler could correct this mistake, but
would we want it to? Perhaps this small error is indicative of a
larger conceptual error we made; perhaps our spec says that print
should indeed have the type we wrote, and the mistake was in using
show.

So we've determined that if we provide explicit types for functions,
these should match up with the type the compiler infers. Instances are
just the same. If we write a specific type, like 'm', then we mean 'm'
and not 'Monad m = m'. We could give similar examples for instances
as we did for functions above. Suppose we had:

class Monad m => Foo m where ...

instance Foo m where ...

Then someone, just seeing the instance (which may be a completely
different file to the class), may assume that there's an instance for
every type, and get a similarly confusing situation to the print
example. Or perhaps we _do_ indeed want an instance of Foo for every
type, and the constraint on the class head of Foo was the mistake,
then the compiler would accept our program, unhelpfully.
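
To make the instance case concrete, a small sketch (the class and instance
are invented here; the catch-all instance needs FlexibleInstances and
UndecidableInstances):

class Monad m => Foo m where
  foo :: a -> m a

-- Rejected: with no context, GHC cannot deduce the superclass
-- constraint (Monad m) for an arbitrary m:
--   instance Foo m where foo = return

-- Accepted: the instance restates the constraint explicitly.
instance Monad m => Foo m where
  foo = return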

Having the compiler second-guess our mistakes is unhelpful and confusing.

HTH.

-David House, [EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Genuine Eratosthenes sieve [Was: Optimization fun]

2007-02-19 Thread Melissa O'Neill

Sorry, I'm a little late to this thread, but the topic of

sieve [] = []
sieve (x:xs) = x : sieve [y | y - xs, y `mod` x /= 0]
(as posted by Rafael Almeida) is somewhat dear to my heart, as I  
wrote a paper about it last summer and sent it in to JFP.  Still  
waiting for a reply though.


Let's go back to the beginning with the classic complaint and the  
excuses...


Creighton Hogg wrote:
So a friend and I were thinking about making code faster in  
Haskell, and I was wondering if there was a way to improve the  
following method of generating the list of all prime numbers.  It  
takes about 13 seconds to run, meanwhile my friend's C version took  
0.1.


Here come the excuses, like this one from apfelmus,
While Haskell makes it possible to express very complicated  
algorithms in simple and elegant ways, you have to expect to pay a  
constant factor (roughly 2x-10x) when competing against the same  
algorithm in low-level C.

and this one from Nicolas Frisby,

I have yet to need my Haskell to perform well


Matthew Brecknell came up with something much faster, namely

primes :: [Int]
primes = 2 : filter isPrime [3,5..] where
  f x p r = x < p*p || mod x p /= 0 && r
  isPrime x = foldr (f x) True primes


FWIW, another way to write this code (without needing to think about  
how fold bails early) is


primes = 2 : [x | x <- [3,5..], all (\p -> x `mod` p > 0) (factorsToTry x)]
  where
    factorsToTry x = takeWhile (\p -> p*p <= x) primes

Both of these algorithms best the sieve we began with and run  
quickly, but as you can see (possibly more clearly from my  
rephrasing), this algorithm is not actually the Sieve of Eratosthenes  
-- it's actually a classic naive primes algorithm which checks a  
number for primality by trying to divide it by every prime up to its  
square root.


But that's okay, because our initial algorithm ISN'T THE REAL SIEVE  
EITHER.  Markus Fischmann hits the nail on the head when he says
The characteristics of a sieve is, that it uses the already found  
primes to generate a list of non-primes that is then removed from a  
list of candidates for primeness.


But then we get distracted by a discussion about avoiding division.   
It's true that the real sieve avoids division, but it is NOT true  
that every algorithm that avoids division is the sieve.  The thread  
ends with this algorithm from Yitzchak Gale:

-- Delete the elements of the first list from the second list,
-- where both lists are assumed sorted and without repetition.
deleteOrd :: Ord a => [a] -> [a] -> [a]
deleteOrd xs@(x:xs') ys@(y:ys')
  | x > y       = y : deleteOrd xs  ys'
  | x < y       = deleteOrd xs' ys
  | otherwise   = deleteOrd xs' ys'
deleteOrd _ ys = ys

sieve (x:xs) = x : sieve (deleteOrd [x+x,x+x+x..] xs)
sieve _  = []

primes = sieve [2..]


Which seems reasonable, until you realize that every prime p we come  
up with is still considered by k deleteOrd filters, where k is the  
number of primes that preceded it.


So, let's recap: the original algorithm is beautiful and simple, but  
it is NOT the actual Sieve of Eratosthenes, NOT because it uses  
modulus, but because fundamentally, at the highest level, it is a  
different algorithm.   At the risk of beating a dead horse, let's see  
why it's not the real sieve.


What makes the sieve an efficient algorithm are the details of *what*  
gets crossed off, *when*, and *how many times*. Suppose that we are  
finding the first 100 primes (i.e., 2 through 541), and have just  
discovered that 17 is prime. We will begin crossing off at 289 (i.e.,  
17 * 17) and cross off the multiples 289, 306, 323, ... , 510, 527,  
making fifteen crossings off in total. Notice that we cross off 306  
(17 * 18), even though it is a multiple of both 2 and 3 and has thus  
already been crossed off twice.  (The starting point of 17 * 17 is a  
pleasing, but actually *minor*, optimization for the *genuine* sieve.)


The algorithm is efficient because each composite number, c, gets  
crossed off f times, where f is the number of unique factors of c  
less than sqrt(c). The average value for f increases slowly, being  
less than 3 for the first 10^12 composites, and less than 4 for the  
first 10^34.


None of the algorithms we've discussed correspond to the above  
algorithm. It is not merely that they don't perform optimizations,  
such as starting at the square of the prime, or that some of then use  
a divisibility check rather than using a simple increment. Rather, at  
a fundamental level, they all work differently than the real sieve.  
Following our earlier example, after finding that 17 is prime, the  
phony algorithm will check to see if 19 is divisible by 17 (in the  
case of Yitzchak's algorithm, divisibility is checked by comparing  
against 17*2), followed by 23, 29, 31, ... , 523, 527, checking a  
total of ninety-nine numbers for divisibility by 17. In fact, even if  
it did (somehow) begin at 289, it would examine all forty-five  
numbers that are not 

Re: [Haskell-cafe] A new code search engine

2007-02-19 Thread Mathew Mills

lang:haskell seems to work just fine for me.

On 2/14/07, Adam Peacock [EMAIL PROTECTED] wrote:


On 2/14/07, Stephane Bortzmeyer [EMAIL PROTECTED] wrote:
 http://www.krugle.com/

 Unlike Google, you can specify Haskell as a language.

It is true that you can't directly specify the programming language
with Google. But you can specify the filetype, i.e. hs or lhs, with
Google.

To do this, just add `filetype:hs` to your search.

And according to my initial tests, Google still wins.

Adam
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Genuine Eratosthenes sieve [Was: Optimization fun]

2007-02-19 Thread Nicolas Frisby

You took my quote entirely out of context. In particular, you omitted
the rest of the sentence, "but I'm sure that day will come". My
statement was no excuse by any stretch of the imagination--I was
initially confused when I saw it in your post and then a bit offended.

The original intent of my statement was as a preface to the fact that
I enjoyed reading the discussion and appreciated everyone's
involvement. On that note, thanks for participating.

I recognize the particular excuse you intended my quote to
represent, but I think it was inappropriate to chop my quote to suit
your needs--it makes me seem short-sighted in the eyes of the
community when, ironically enough, it was actually part of a
prediction that I will need performance from my Haskell someday.

I do feel a little defamed, but I don't want to go off topic on the
list, especially for personal issues. Just try to consider the effect
it will have on others when looking for quotes in the future.

Thanks and no worries,
Nick

On 2/19/07, Melissa O'Neill [EMAIL PROTECTED] wrote:

Sorry, I'm a little late to this thread, but the topic of
 sieve [] = []
 sieve (x:xs) = x : sieve [y | y - xs, y `mod` x /= 0]
(as posted by Rafael Almeida) is somewhat dear to my heart, as I
wrote a paper about it last summer and sent it in to JFP.  Still
waiting for a reply though.

Let's go back to the beginning with the classic complaint and the
excuses...

Creighton Hogg wrote:
 So a friend and I were thinking about making code faster in
 Haskell, and I was wondering if there was a way to improve the
 following method of generating the list of all prime numbers.  It
 takes about 13 seconds to run, meanwhile my friend's C version took
 0.1.

Here come the excuses, like this one from apfelmus,
 While Haskell makes it possible to express very complicated
 algorithms in simple and elegant ways, you have to expect to pay a
 constant factor (roughly 2x-10x) when competing against the same
 algorithm in low-level C.
and this one from Nicolas Frisby,
 I have yet to need my Haskell to perform well

Matthew Brecknell came up with something much faster, namely
 primes :: [Int]
 primes = 2 : filter isPrime [3,5..] where
   f x p r = x < p*p || mod x p /= 0 && r
   isPrime x = foldr (f x) True primes

FWIW, another way to write this code (without needing to think about
how fold bails early) is

primes = 2 : [x | x <- [3,5..], all (\p -> x `mod` p > 0) (factorsToTry x)]
   where
 factorsToTry x = takeWhile (\p -> p*p <= x) primes

Both of these algorithms best the sieve we began with and run
quickly, but as you can see (possibly more clearly from my
rephrasing), this algorithm is not actually the Sieve of Eratosthenes
-- it's actually a classic naive primes algorithm which checks a
number for primality by trying to divide it by every prime up to its
square root.

But that's okay, because our initial algorithm ISN'T THE REAL SIEVE
EITHER.  Markus Fischmann hits the nail on the head when he says
 The characteristics of a sieve is, that it uses the already found
 primes to generate a list of non-primes that is then removed from a
 list of candiates for primeness.

But then we get distracted by a discussion about avoiding division.
It's true that the real sieve avoids division, but it is NOT true
that every algorithm that avoids division is the sieve.  The thread
ends with this algorithm from Yitzchak Gale:
 -- Delete the elements of the first list from the second list,
 -- where both lists are assumed sorted and without repetition.
 deleteOrd :: Ord a => [a] -> [a] -> [a]
 deleteOrd xs@(x:xs') ys@(y:ys')
   | x > y       = y : deleteOrd xs  ys'
   | x < y       = deleteOrd xs' ys
   | otherwise   = deleteOrd xs' ys'
 deleteOrd _ ys = ys

 sieve (x:xs) = x : sieve (deleteOrd [x+x,x+x+x..] xs)
 sieve _  = []

 primes = sieve [2..]

Which seems reasonable, until you realize that every prime p we come
up with is still considered by k deleteOrd filters, where k is the
number of primes that preceded it.

So, let's recap: the original algorithm is beautiful and simple, but
it is NOT the actual Sieve of Eratosthenes, NOT because it uses
modulus, but because fundamentally, at the highest level, it is a
different algorithm.   At the risk of beating a dead horse, let's see
why it's not the real sieve.

What makes the sieve an efficient algorithm are the details of *what*
gets crossed off, *when*, and *how many times*. Suppose that we are
finding the first 100 primes (i.e., 2 through 541), and have just
discovered that 17 is prime. We will begin crossing off at 289 (i.e.,
17 * 17) and cross off the multiples 289, 306, 323, ... , 510, 527,
making fifteen crossings off in total. Notice that we cross off 306
(17 * 18), even though it is a multiple of both 2 and 3 and has thus
already been crossed off twice.  (The starting point of 17 * 17 is a
pleasing, but actually *minor*, optimization for the *genuine* sieve.)

The algorithm is efficient because each composite number, c, gets
crossed off f 

Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Matthew Naylor
Hi all,

 GHC v Hugs v Yhc v NHC v ...

 ...Hacle & Clean!

I shoved 5 of the benchmarks that Donald used through Hacle, and
compiled the outputs using version 2.1 of the Clean compiler.  Results
are below.

As for the other examples, Hacle doesn't like non-Haskell98 and
translates arbitrary-precision integers to fixed-precision ones (!)

I'm not sure how well Hacle would work with nobench because input
files must be unambiguously-typed assuming a default () at the top.
So some programs may require a little tweaking to go through.  Mind,
this was only a problem on 1 of the 5 programs I just tried...

Matt.

(Note: ignore the 65536 at the end of each Clean result -- my fault
for not compiling with the right options)

===
binarytrees (GHC)
===
stretch tree of depth 17 check: -1
131072   trees of depth 4check: -131072
32768trees of depth 6check: -32768
8192 trees of depth 8check: -8192
2048 trees of depth 10   check: -2048
512  trees of depth 12   check: -512
128  trees of depth 14   check: -128
32   trees of depth 16   check: -32
long lived tree of depth 16  check: -1

real0m3.301s
user0m3.280s
sys 0m0.016s
===
binarytrees (Clean)
===
Execution: 2.34  Garbage collection: 0.25  Total: 2.59
stretch tree of depth 17 check: -1
131072   trees of depth 4check: -131072
32768trees of depth 6check: -32768
8192 trees of depth 8check: -8192
2048 trees of depth 10   check: -2048
512  trees of depth 12   check: -512
128  trees of depth 14   check: -128
32   trees of depth 16   check: -32
long lived tree of depth 16  check: -1
65536

real0m2.691s
user0m2.592s
sys 0m0.100s
===
partial sums (GHC)
===
2.9987  (2/3)^k
3160.817621887086   k^-0.5
0.99602026  1/k(k+1)
30.31454150956248   Flint Hills
42.99523399808393   Cookson Hills
15.30901715473893   Harmonic
1.644933666848388   Riemann Zeta
0.6931469805600944  Alternating Harmonic
0.7853980633974358  Gregory

real0m4.887s
user0m4.888s
sys 0m0.000s
===
partial sums (Clean)
===
Execution: 4.41  Garbage collection: 0.05  Total: 4.46
3   (2/3)^k
3160.81762188709k^-0.5
0.9960203   1/k(k+1)
30.3145415095625Flint Hills
42.9952339980839Cookson Hills
15.3090171547389Harmonic
1.64493366684839Riemann Zeta
0.693146980560094   Alternating Harmonic
0.785398063397435   Gregory
65536

real0m4.545s
user0m4.468s
sys 0m0.076s
===
queens (GHC)
===
14200

real0m1.990s
user0m1.980s
sys 0m0.012s
===
queens (Clean)
===
Execution: 6.58  Garbage collection: 1.07  Total: 7.65
14200
65536

real0m7.921s
user0m7.656s
sys 0m0.264s
===
recursive (GHC)
===
Ack(3,9): 4093
Fib(36.0): 2.4157817e7
Tak(24,16,8): 9
Fib(3): 3
Tak(3.0,2.0,1.0): 2.0

real0m5.232s
user0m5.224s
sys 0m0.008s
===
recursive (Clean)
===
Execution: 2.40  Garbage collection: 0.00  Total: 2.40
Ack(3,9): 4093
Fib(36): 24157817
Tak(24,16,8): 9
Fib(3): 3
Tak(3,2,1): 2
65536

real0m2.403s
user0m2.400s
sys 0m0.000s
===
loop (GHC)
===
3.3335

real0m1.039s
user0m1.036s
sys 0m0.004s
===
loop (Clean)
===
Execution: 1.26  Garbage collection: 0.00  Total: 1.26
3.33
65536

real0m1.325s
user0m1.260s
sys 0m0.068s
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Stefan O'Rear
On Mon, Feb 19, 2007 at 08:12:14PM +, Matthew Naylor wrote:
 Hi all,
 
  GHC v Hugs v Yhc v NHC v ...
 
  ...Hacle  Clean!
 
 I shoved 5 of the benchmarks that Donald used through Hacle, and
 compiled the outputs using version 2.1 of the Clean compiler.  Results
 are below.

Submit a patch, it's easy!  Took me 10 minutes to add YHC support
and send it in.  (the reason my name isn't in darcs changes is because
dons' X crashed, killing darcs, irreparably corrupting _darcs, so he
had to rm -r _darcs ; darcs init)

Just edit header.mk and footer.mk in the obvious way.

 As for the other examples, Hacle doesn't like non-Haskell98 and
 translates arbitrary-precision integers to fixed-precision ones (!)

Don't worry, nobench is based on a testsuite and as such is prepared to diff
output.  (if that doesn't happen, I'd consider it a bug)

 I'm not sure how well Hacle would work with nobench because input
 files must be unambiguously-typed assuming a default () at the top.
 So some programs may require a little tweaking to go through.  Mind,
 this was only a problem on 1 of the 5 programs I just tried...

Well, he was willing to make concessions for Yhc brokenness (wrt importing
System.Environment - yhc's System doesn't export getArgs like the Report
says it should (first tangible result of nofib: the Yhc team has fixed it))

And don't worry about adding dependencies - you can remove compilers you don't
have by editing the COMPILERS = line in header.mk.

Stefan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Genuine Eratosthenes sieve [Was: Optimization fun]

2007-02-19 Thread apfelmus
Melissa O'Neill wrote:
 FWIW, another way to write this code (without needing to think about how
 fold bails early) is
 
 primes = 2 : [x | x <- [3,5..], all (\p -> x `mod` p > 0) (factorsToTry x)]
   where
 factorsToTry x = takeWhile (\p -> p*p <= x) primes

Well, you certainly thought of the fact that 'all' bails early :)

 [...]
 But that's okay, because our initial algorithm ISN'T THE REAL SIEVE
 EITHER.
 [...]
 The thread ends with this algorithm from Yitzchak Gale:
 [...]
 sieve (x:xs) = x : sieve (deleteOrd [x+x,x+x+x..] xs)
 sieve _  = []

 Which seems reasonable, until you realize that every prime p
 we come up with is still considered by k deleteOrd filters,
 where k is the number of primes that preceeded it.
 [...]
 What makes the sieve an efficient algorithm are the details of *what*
 gets crossed off, *when*, and *how many times*.

Ah, thank you for that fresh slap in the face. Somehow, the name 'sieve'
clouded the fact that the point of a sieve is how to represent it as a
data structure in order to avoid too many multiple crossings. Iterating
'deleteOrd' organizes the crossing off in linear time, whereas logarithmic
time is clearly conceivable. Though I think that the `mod`-algorithm
counts as a sieve; it just does a bad job at organizing the crossing off.

 The algorithm is efficient because each composite number, c, gets
 crossed off f times, where f is the number of unique factors of c less
 than sqrt(c). The average value for f increases slowly, being less than
 3 for the first 10^12 composites, and less than 4 for the first 10^34.

This analysis misses the fact that managing the cross off table
invariably needs log(#primes <= c) ~ log(c) extra time for every c. I'm
not quite sure, but I think that this has to be multiplied (not added)
with f. The other algorithms don't have an extra factor, although f is
clearly larger for them.


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] LambdaVM (was Re: Lambada and connecting Haskell to a Weblogic server)

2007-02-19 Thread Daniil Elovkov

Hello Brian

I wrote you on this just some hours ago, before I had looked through
haskell-cafe! That's an interesting coincidence!

Anyway, moving my questions here.

1.
How easy is it to interoperate with Java code (call methods, pass Java
objects) for Haskell code run by LambdaVM rts? And vice versa, can
Haskell code be called in a more flexible way than calling Main.main
with String[] ?

Looking at the discussion, I see that foreign import works great. What
about foreign export?

1a.
What about types? It seems like Bool in your example is automatically
mapped to the Java type boolean. And in Adam Megacz's example, where
and how are JTree and JString defined?

2.
As far as I understand, currently, GHC doesn't allow catching the
out-of-memory exception in any way; it simply exits. This prevents
gracefully catching this error in Java code (the JVM terminates), unlike
many other errors, like pattern match failure, undefined, etc.
Is this error reported as a Java exception in the LambdaVM rts?


2007/2/15, Brian Alliet [EMAIL PROTECTED]:

On Thu, Feb 15, 2007 at 02:53:47PM -0500, Mark T.B. Carroll wrote:
 Is it easy to create Haskell stubs (in the IO monad, presumably) for
 standard Java libraries so that your compiled-to-JVM Haskell code can
 easily use the usual Java APIs like Swing? One source of vexation for us
 is mapping between Java types and Haskell types.

Yep. Normal FFI style foreign imports work.

foreign import jvm unsafe "java.lang.Character.isDigit" isDigit :: Int -> Bool


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Why do I have to specify (Monad m) here again?

2007-02-19 Thread Marc Weber
Hi David.

I see that it's useful to add complete type signatures without letting
the compiler add stuff magically. This is why I've tried to write the final
question down under a new topic: whether it would be useful to be able to write
down partial type signatures, where browsing coders know that they are not
complete... (Scroll down to see my answer.)
When introducing a new concept (partially typed functions) we don't have to
discuss whether type signatures are useful.

I will indicate this incomplete type signature by adding [...] at the end.

 type we wrote down. Imagine that print is in a library and a user's
 browsing the source of this library. They see the print function and
 try to use it on something that doesn't satisfy the Show constraint.
 The compiler bombs out with 'Cannot find Show instance' and the user
 is confused. 
Every Haskell newbie will be confused by any error message ;)
But the haskell programmer will only be confused the first time *lol*
Then he knows how to handle it.

 print :: Show a => a -> IO ()
 print x = show x
 
 If we hadn't written that type signature, then the compiler would
 infer a type and accept the program. 
Aeh. Was this really the issue: talking about whether it's useful to write down
type signatures, or about writing down incomplete signatures with users not
knowing they are incomplete?
Talking about:
print :: a -> IO () [...]
print x = show x
the compiler would not have compiled this in any case.

However, as we _did_ provide a
 type signature, it acts a little like a spec or QuickCheck property
 and reveals a typo straight away: we missed out the putStrLn.
 Similarly, if we write the type:


 print :: a -> IO ()
 
 Then the attitude taken by the language designers dictates that that's
 what you mean. If you then try to use show, you presumably made a
 mistake writing the type. The compiler could correct this mistake, but
 would we want it to? Perhaps this small error is indicative of a
 larger conceptual error we made; perhaps our spec says that print
 should indeed have the type we wrote, and the mistake was in using
 show.
 
 So we've determined that if we provide explicit types for functions,
 these should match up with the type the compiler infers. Instances are
 just the same. 
They are not. Why? Because I look at the class declaration as well when looking
at instances...
Expressed differently:
  function implementation - function type signature
corresponds to
  class method implementation - class type signature (1) (Monad m => ...)
or
  class method implementation - instance type signature (2) (Monad m => has
to be repeated here)
?

When looking at types (when browsing instances (2)) I'm always looking at the
class type declaration (1) as well. So in my given example, class (Monad m) => ...,
I would have seen it anyway.

I could only think of one example where omitting the (Monad m) => part might be
useful:
You have a class providing a function
match regex subject = ...
and two implementations returning (before, match, after) or (match).

Then you might want to write
show3 :: (a,a,a)
show3 = show
to use show3 to select the right implementation.

But in this case it is more convenient to write
show3 a@(_,_,_) = show a

So I'm totally convinced that we don't need new language features..

Marc
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread Neil Mitchell

Hi


Well, he was willing to make concessions for Yhc brokenness (wrt importing
System.Environment - yhc's System doesn't export getArgs like the Report
says it should (first tangible result of nofib: the Yhc team has fixed it))


The second tangible result should be that Yhc runs faster than nhc.
Our internal testing originally showed a 20% speedup over nhc -
something seems to have gone wrong to slow down Yhc, so we are working
to fix this. Hopefully in a few days Yhc will beat nhc - just in case
anyone is drawing performance ideas from the current benchmark.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: nobench: Haskell implementation benchmarks. GHC v Hugs v Yhc v NHC v ...

2007-02-19 Thread David House

On 19/02/07, Neil Mitchell [EMAIL PROTECTED] wrote:

The second tangible result should be that Yhc runs faster than nhc.
Our internal testing originally showed a 20% speedup over nhc -
something seems to have gone wrong to slow down Yhc, so we are working
to fix this. Hopefully in a few days Yhc will beat nhc - just in case
anyone is drawing performance ideas from the current benchmark.


Great! Nothing like a bit of competition to spur coding into action! :)

Nice work, dons.

--
-David House, [EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FFI basics

2007-02-19 Thread Yitzchak Gale

Simon Peyton-Jones wrote:

Yitz, Please do make time to do this!
This is the moment, while it is still fresh in your mind.


Of course, you are correct. Thanks for the push.
I am a bit busy with work, but the information is not
lost. I'll have it up soon.

Regards,
Yitz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Genuine Eratosthenes sieve [Was: Optimization fun]

2007-02-19 Thread Yitzchak Gale

Hi Melissa,

I enjoyed your article. I especially like your trick
of starting with p*p.

You wrote:

Which seems reasonable, until you realize that every
prime p we come up with is still considered by
k deleteOrd filters, where k is the number of primes
that preceded it.


It is true that I have never read Eratosthenes in the original,
but my deleteOrd algorithm was meant to reproduce
faithfully the sieve algorithm as I was taught it in grade
school.

We did not cross out any number more than once. But
we did consider each multiple of every prime,
moving on if we found it already crossed off. My algorithm
does exactly the same.

I do not deny that primes can be found more efficiently.
But I believe that my algorithm is exactly what I was
taught as the sieve. So it feels genuine to me.

A few days ago, I taught the sieve to my 6 year old
daughter, the same way I learned it. She loved it!
She is currently working on memorizing the
multiplication tables, so the sieve is intriguing to
her. I'm not sure if lazy priority queues would be
quite so intriguing, though. I hope I have not
poisoned her mind.

Regards,
Yitz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Recursion in Haskell

2007-02-19 Thread Benjamin Franksen
P. R. Stanley wrote:
 Chaps,
 is there another general pattern for mylen, head or tail?
 mylen [] = 0
 mylen (x:xs) = 1 + mylen (xs)
 
  head [] = error "what head?"
 head (x:xs) = x
 
  tail [] = error "no tail"
  tail (x:xs) = xs

There are of course stylistic variations possible, e.g. you can use case
instead of pattern bindings:

mylen list = case list of
  []     -> 0
  (x:xs) -> 1 + mylen (xs)

As you see, this moves pattern matching from the lhs to the rhs of the
equation. Another very common 'pattern' is to factor the recursion into a
generic higher order function

fold op z [] = z
fold op z (x:xs) = x `op` (fold op z xs)
 -- parentheses not strictly necessary here, added for readability

and define mylen in terms of fold

mylen = fold (+) 0

You also have the possibility to use boolean guards as in

mylen xs
  | null xs   = 0
  | otherwise = 1 + mylen (tail xs)

(Although here we use the more primitive functions (null) and (tail) which
in turn would have to be defined using pattern matching. Pattern matching
is the only way to examine data of which nothing is known other than its
definition.)

Lastly, there are cases where you want to use nested patterns. For instance,
to eliminate successive duplicates you could write

elim_dups (x:x':xs) = if x == x' then xs' else x:xs'
  where xs' = elim_dups (x':xs)
elim_dups xs = xs

Here, the first clause matches any list with two or more elements; the
pattern binds the first element to the variable (x), the second one to
(x'), and the tail to (xs). The second clause matches everything else, i.e.
empty and one-element lists and acts as identity on them.

 This pattern matching reminds me of a module on formal spec I studied
 at college.

As long as your code doesn't (have to) use a tricky algorithm (typically if
the algorithm is more or less determined by the data structure, as in the
above examples) then it is really like an executable specification.

Cheers
Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Recursion in Haskell

2007-02-19 Thread Matthew Brecknell
P. R. Stanley:
 is there another general pattern for mylen, head or tail?
 mylen [] = 0
 mylen (x:xs) = 1 + mylen (xs)
 
  head [] = error "what head?"
 head (x:xs) = x
 
  tail [] = error "no tail"
  tail (x:xs) = xs

Benjamin Franksen:
 Another very common 'pattern' is to factor the recursion into a
 generic higher order function
 
 fold op z [] = z
 fold op z (x:xs) = x `op` (fold op z xs)
  -- parentheses not strictly necessary here, added for readability
 
 and define mylen in terms of fold
 
 mylen = fold (+) 0

Looks more like a sum than a length. Benjamin really meant something
like this:

mylen = fold (const succ) 0

Note, the Prelude already contains various fold functions, so you don't
have to write your own. The one corresponding to fold is foldr.
(Benjamin just wrote it out to demonstrate the practice of extracting
and naming reusable computational structures as higher-order functions).

Now, can you write head as a fold? Would you want to? What about tail?
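
(One answer to the first question, as a sketch: head drops out of foldr
because the combining function can simply ignore the folded-up tail; tail
cannot be written this way directly, since foldr never hands you the rest of
the list as a list.)

myhead :: [a] -> a
myhead = foldr (\x _ -> x) (error "what head?")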

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: Derangement version 0.1.0

2007-02-19 Thread dgriffi3
This is the initial version of a very small library, mostly using the functional graph library, to find a derangement of a set, or rather a list. 
A derangement of a set is a permutation with no fixed points; like many constrained matching problems, it is susceptible to solution via a max-flow algorithm.


The source and its darcs repository is available at 
http://www.acm.uiuc.edu/~dgriffi3/projects/derangement/

Of course questions, comments and patches are welcome.

--
Dennis Griffith, [EMAIL PROTECTED] Treasurer
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Genuine Eratosthenes sieve [Was: Optimization fun]

2007-02-19 Thread Melissa O'Neill

apfelmus wrote:
I think that the `mod`-algorithm counts as sieve, it just does a  
bad job at organizing the crossing off.


I'd say that it might count as a sieve, but *algorithmically* it is  
not The Sieve of Eratosthenes.  If you abstract away the  
algorithmic details to a mathematical essence, you can argue they're  
the same, but I'd argue that the algorithmic details are what make  
people want to use the Sieve of Eratosthenes in the first place.


Algorithmically, when you say remove all the multiples of 17 it  
really matters *how* you do it (c.f., an abstract mathematical  
perspective, where we might not care).  I'd argue that Eratosthenes  
did say how to do it, and doing it a different way is misleading  
enough that we should NOT call the resulting code The Sieve of  
Eratosthenes.


Yitzchak Gale:
We did not cross out any number more than once. But we did consider  
each multiple of every prime, moving on if we found it already  
crossed off. My algorithm does exactly the same.


Actually it doesn't.  Every composite gets handled by a stack of  
deleteOrd filters, each one trying to remove multiples of one prime.   
Whether you write

  sieve (x:xs) = x : sieve (deleteOrd [x+x,x+x+x..] xs)
or
   sieve (x:xs) = x : sieve (filter (\c -> c `mod` x > 0) xs)
you're essentially doing the same amount of work.  Both ``deleteOrd
[x+x,x+x+x..]'' and ``filter (\c -> c `mod` x > 0)'' do exactly the
same job -- your version goes faster because it avoids division, but  
that's all.


As I showed in my earlier post, with the worked example of removing  
all the multiples of 17,
  - Eratosthenes's sieve never says the equivalent of, "Hmm, I
wonder if 19 is a multiple of 17" -- but both the basic sieve we
began with and the deleteOrd version do
  - Eratosthenes's sieve crosses-off/looks-at 459 (i.e., 17 * 27),  
even though we crossed it off already when we crossed off multiples  
of 3 -- whether you call that crossing it off a second time, or  
merely stepping over it, it still needs to alight on 459 to get to  
493 (17 * 29), which hasn't been crossed off yet


The first way of crossing off multiples of 17 is inefficient -- it  
checks each composite against all the primes up to its smallest  
factor.  That is exactly what the classic naive primes algorithm  
does.  The second method is efficient -- it touches each composite  
once for each unique factor it has.


Of course, the first is easy to implement as a one-liner and the  
second isn't, but (sadly), that doesn't make the first way right and  
the second way wrong.


Yitzchak Gale also wrote:
It is true that I have never read Eratosthenes in the original, but
my deleteOrd algorithm was meant to reproduce faithfully the sieve  
algorithm as I was taught it in grade school.


I don't know which version you learned in grade school.  If you  
learned it as a mathematical abstraction (e.g., as a way to define  
what the primes are), you have faithfully reproduced that  
abstraction.  If you learned it as a technique for people to use to  
efficiently produce lists of primes, you have *not* reproduced it,  
because I'll bet in grade school you never checked 19 for  
divisibility by 17.


The bait and switch occurs when people remember only the mathematical  
abstraction and then look for elegant ways to recreate (only) that  
abstraction.  If you do that in a way that is fundamentally different  
at an algorithmic level, it shouldn't come as a surprise when that  
implementation doesn't run as efficiently as the original algorithm.   
You also shouldn't be surprised when someone complains that you  
haven't really implemented that original algorithm.


A few days ago, I taught the sieve to my 6 year old daughter, the  
same way I learned it. She loved it!


Unless you broke out the set notation with your six year old, you  
probably explained it operationally.  Thus you explained the  
algorithm, not a mathematical abstraction drawn from the algorithm.   
As such I'm sure you taught her the real thing.


And if you want to teach her cool Haskell one-liners, the sieve  
example that began this thread is delightfully brief.  But don't  
expect it to run quickly.  And you'd be doing her a disservice and  
propagating confusion if you tell her that it's algorithmically the  
same as the genuine Sieve of Eratosthenes.


Melissa.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Does laziness make big difference?

2007-02-19 Thread Nick

apfelmus,

Cool! I really like your examples! Thank you.

Nick.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] our worst unsafePerformIO nightmares are upon us!

2007-02-19 Thread Clifford Beshers

Nicolas Frisby wrote:

http://www.thinkgeek.com/geektoys/cubegoodies/86b8/

Now you can really show your coders why unsafePerformIO is to be avoided! 


On the contrary, seems like a little more non-determinism would really 
make that thing into an effective weapon. ;-)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe