Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was: Genuine Eratosthenes sieve)

2007-02-25 Thread Lennart Augustsson

OK.

Another weird thing is that much of the Haskell code seems to work  
with Integer whereas the C code uses int.  That doesn't seem fair.


-- Lennart

On Feb 25, 2007, at 02:40 , Melissa O'Neill wrote:

Someone asked if I'd include a classic C version of the Sieve in  
my comparisons.  Having done so, Lennart wrote (slightly rephrased):
How did you compare the C version with the Haskell versions? The  
Haskell programs produce the Nth prime, whereas the C code  
produces the last prime less than M.


True.  But since I have to know what M is to find the Nth prime,  
it's easy enough to ask the C code to produce the right prime.


To make the C code do what the Haskell code does you need to set
some upper bound that is related to the prime number
distribution.  I see no trace of this in your code.


The Haskell versions that go up to a limit do this, so I could  
easily have written code to do it -- it's not hard, but has no real  
bearing on the time complexity of the code, so I didn't bother.
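(For reference, one standard way to pick such a bound -- just a sketch, not
code from my zip file -- uses the estimate p_n <= n*(ln n + ln ln n), which
holds for n >= 6:)

nthPrimeBound :: Int -> Int
nthPrimeBound n
  | n < 6     = 13                                 -- covers p_1 .. p_5 by hand
  | otherwise = ceiling (n' * (log n' + log (log n')))
  where n' = fromIntegral n :: Double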


You could argue that it's cheating to tell it so blatantly when to  
stop, but I hate the C code I'd found enough that I didn't really  
want to touch it any more than I had to.



A much more legitimate complaint about the comparison with the C  
code is actually on space usage.  It uses much more space than some  
of the algorithms it's competing with.  More about that in an  
upcoming message.


Melissa.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was: Genuine Eratosthenes sieve)

2007-02-25 Thread Melissa O'Neill
For those enjoying the fun with prime finding, I've updated the  
source at


http://www.cs.hmc.edu/~oneill/code/haskell-primes.zip

I've tweaked my code a little to improve its space behavior when  
finding primes up to some limit, added an up-to-limit version of the  
Naive Primes algorithm, and added Oleg's prime finding code too.


I also got a chance to look at space usage more generally.  I won't  
reproduce a table here, but the conclusions were more-or-less what  
you'd expect.  The unlimited-list algorithms used O(n) space to
find n primes (except for Runciman's algorithm, which appeared to be
much worse), and the primes-up-to-a-limit algorithms used O(sqrt(n))
space to find the nth prime.


Both of these are better than the classic C algorithm, which uses
O(n log n) space to find the nth prime.  For example, heap profiling
shows that my own O(sqrt(n)) algorithm uses only 91200 bytes to find
the 10^7th prime, whereas the classic C algorithm needs at least
11214043 bytes for its array -- a factor of more than 100 difference,
and one that gets worse for larger n.


Lennart Augustsson wrote:
Another weird thing is that much of the Haskell code seems to work  
with Integer whereas the C code uses int.


Originally, I was comparing Haskell with Haskell, and for that  
purpose I wanted to have a level playing field, so going with Integer  
everywhere made sense.



That doesn't seem fair.


Actually, to the extent that any of the comparisons are fair, I  
think this one is too.  After all, typical Haskell code uses Integer  
and typical C code uses int.  I could use arrays in my Haskell code  
and never use laziness, but when I program in Haskell, I'm not trying  
to exactly recreate C programs, but rather write their Haskell  
equivalents.  For example, to me, producing a lazy list was essential  
for a true Haskell feel.  For some people, the Haskell feel also  
includes treating the language as a declarative specification  
language where brevity is everything -- but for me, other things  
(like fundamental algorithmic efficiency and faithfulness to the core  
ideas that make the Sieve of Eratosthenes an *efficient* algorithm)  
are universal and ought to be common to both C and Haskell versions.


But to allow a better comparison with C, I've added a run for an Int  
version of my algorithm.  With that change, my code is closer to the  
speed of the C code.  More interestingly, for larger n, I seem to be  
narrowing the gap.  At 10^6, my code runs nearly 30 times slower than  
the classic C version, but at 10^8, I'm only about 20 times slower.   
This is especially interesting to me because there was some (reasonable-
looking) speculation from apfelmus several days ago, which suggested
that my use of a priority queue incurred an extra log(n) overhead,
from which you would expect a worse asymptotic complexity, not an
equivalent or better one.


Melissa.

Enc. (best viewed with a fixed-width font)

   ---------------------------------------------------------------------
                     Time (in seconds) for Number of Primes
   Algorithm        10^3     10^4     10^5      10^6      10^7      10^8
   ---------------------------------------------------------------------
   C-Sieve          0.00     0.00     0.01      0.29      5.12     88.24
   O'Neill (#3)     0.01     0.04     0.55      8.34    122.62   1779.18
   O'Neill (#2)     0.01     0.06     0.95     13.85    194.96   2699.61
   O'Neill (#1)     0.01     0.07     1.07     15.95    230.11         -
   Bromage          0.02     0.39     6.50    142.85         -         -
   sieve (#3)       0.01     0.25     7.28    213.19         -         -
   Naive (#2)       0.02     0.59    14.70    386.40         -         -
   Naive (#1)       0.32     0.66    16.04    419.22         -         -
   Runciman         0.02     0.74    29.25         -         -         -
   Reinke           0.04     1.21    41.00         -         -         -
   Zilibowitz       0.02     2.50   368.33         -         -         -
   Gale (#1)        0.12    17.99        -         -         -         -
   sieve (#1)       0.16    32.59        -         -         -         -
   sieve (#2)       0.01    32.76        -         -         -         -
   Oleg             0.18    68.40        -         -         -         -
   Gale (#2)        1.36   268.65        -         -         -         -
   ---------------------------------------------------------------------

- The dashes in the table mean I gave up waiting (i.e., > 500 seconds)
- sieve (#1) is the classic example we're all familiar with
- sieve (#2) is the classic example, but sieving a list without
  multiples of 2, 3, 5, or 7 -- notice how it makes no real difference
- sieve (#3) is the classic example, but generating a lazy-but-finite
  list (see below)
- O'Neill (#1) is basically the algorithm of mine discussed in
  http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf, with a few minor tweaks
- O'Neill (#2) is a variant of that algorithm that generates a lazy-

Re: [Haskell-cafe] Illegal .... - solved - style question

2007-02-25 Thread Marc Weber
On Sun, Feb 25, 2007 at 12:18:25AM +0300, Bulat Ziganshin wrote:
 Hello Marc,
 
 Friday, February 23, 2007, 5:22:12 PM, you wrote:
 
 type ActionMonad a l = forall l. (HOccurs D1 l)
     => ( ReaderT l IO a )
 
 'l' should be either parameter of type constructor or forall'ed
 variable. it seems that you try to set limitations on type constructor
 parameter - thing that has another syntax and anyway not much support
 in haskell'98. i suggest you to use smth like the following instead:
 
  type ActionMonad a l = ( ReaderT l IO a )
 
 instance (HOccurs D1 l) => Get CR (ActionMonad Bool ()) where
get (CR a) = a
 
 -- 
 Best regards,
 Bulat  mailto:[EMAIL PROTECTED]
Hi Bulat (and others)
Thanks for your answer. I've tried doing this:
a)
The idea of using get instead of record accessing functions a1, a2 is
overloading:
(  let a = a1 record
   vs
   let (A a) <- get record
   let (A a) <- get record2 (record2 can be a different type than record)
)
benefit of the second version: You can also see its type immediately.
Because I've taught DrIFT to derive Get a b .. it's not much additional
work.

b)
The second idea is using an environment which is typesafe and easy to
extend by using an HList and hOccurs to get the environment.
I think this is great because you don't have to write getStateX,
getEnvY = accessor . ask/get

I'm curious to read your comments on a) and b) ;)

Happily
Marc Weber

= testfile - needs HList and GHC =
{-# OPTIONS -fglasgow-exts #-}
{-# OPTIONS -fallow-undecidable-instances #-}
{-# OPTIONS -fallow-incoherent-instances #-}
{-# OPTIONS -fallow-overlapping-instances #-}
{-# OPTIONS -fno-monomorphism-restriction #-}
module Main where
import HList hiding ( liftIO )
import Control.Monad.Reader
import Control.Monad.Trans
import HOccurs

class Get a b where
  get :: a -> b

data D1 = D1 Int deriving (Show) -- dummy type
data D2 = D2 Int deriving (Show) -- dummy type

type ActionMonad l a = ReaderT l IO a

newtype A l a = A (ActionMonad l a)
newtype B l a = B (ActionMonad l a)

data ActionRecord l a = ActionRecord { a1 :: A l a
 , a2 :: B l a
 }
instance Get (ActionRecord l a) (A l a)
  where get ar = a1 ar
instance Get (ActionRecord l a) (B l a)
  where get ar = a2 ar

type RD1 = (HCons D1 HNil)
type RD2D1 = (HCons D2 RD1)
myActionRecord = ActionRecord (A act1) (B act2)
  where 
    act1 :: ReaderT RD1 IO ()
    act1 = do
      liftIO $ print "act 1"
      (d1@(D1 _)) <- asks hOccurs
      liftIO $ print (show d1)
    act2 :: ReaderT RD1 IO ()
    act2 = do
      liftIO $ print "act 2"
      (d1@(D1 _)) <- asks hOccurs
      liftIO $ print (show d1)
      mytrans act3 $ (\l -> HCons (D2 7) l) -- adding new environment (D2 7) here
      -- addD2 act3 (D2 2)
    mytrans f tr = do a <- ask
                      lift $ runReaderT f (tr a)

-- here order doesn't matter:
act3 :: ( HOccurs D1 (HCons a b)
        , HOccurs D2 (HCons a b)) =>
        ReaderT (HCons a b) IO ()
act3 = do
  liftIO $ print "act 3 within act2"
  (d2@(D2 _)) <- asks hOccurs
  liftIO $ print (show d2)
-- asks' :: (HOccurs l D2) => (l -> D2) -> ReaderT l IO D2
-- asks' = asks
-- addD2 :: (HOccurs l' D2, HOccurs l D1) => ReaderT l' IO a -> D2 -> ReaderT l IO a
-- addD2 m d2 = mytrans m (\l -> HCons d2 l)

hcons2 :: a -> b -> HCons a (HCons b HNil)
hcons2 a b = HCons a (HCons b HNil)

main = let 
  (A act1 :: A RD1 ()) = get myActionRecord
  (B act2 :: B RD1 ()) = get myActionRecord
  in do runReaderT (sequence [act1, act2]) (HCons (D1 1) HNil)
runReaderT act3 (hcons2 (D2 7) (D1 1))
-- order doesn't matter:
runReaderT act3 (hcons2 (D1 3) (D2 7))
= testfile ===
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] R wrapper in haskell

2007-02-25 Thread Peng Zhang

Hi folks,

My primary language is R, which is an imperative functional language. I
start to learn haskell and try to use it instead when a project is
time-consuming and it takes months in R. I like the language very much
so far, but I do miss some important functions in R which can generate
random numbers from all kinds of distributions. R provides a standalone
library in c which contains all these functions and I try to write a
wrapper for haskell. Since I am very new to haskell, I want to hear some
suggestions to confirm that I am on the right track.

The functions are the following:
void set_seed(unsigned int, unsigned int);
void get_seed(unsigned int *, unsigned int *);
they can set or get seed for random number generator.

For any distribution, we have four functions for it
-- Normal distribution
double dnorm4(double x, double mu, double sigma, int give_log)
double pnorm5(double x, double mu, double sigma, int lower_tail,int
log_p)
double qnorm5(double p, double mu, double sigma, int lower_tail, int
log_p)
double rnorm(double mu, double sigma)
where dnorm calculates density, pnorm calculates p-value, qnorm
calculates quantile, and rnorm generates normal random variables.

dnorm, pnorm, qnorm are easy since they don't have side-effect. I think
I can just use the following:
foreign import ccall "dnorm4" dnorm :: Double -> Double -> Double -> Int
  -> Double
foreign import ccall "pnorm5" pnorm :: Double -> Double -> Double -> Int
  -> Int -> Double
foreign import ccall "qnorm5" qnorm :: Double -> Double -> Double -> Int
  -> Int -> Double
(Should I use CDouble or CInt here?)
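A rough sketch of what that might look like using the C types from
Foreign.C (just my guess at the bindings, with realToFrac handling the
Double/CDouble conversion; the entry-point name is the one above):

import Foreign.C.Types (CDouble, CInt)

foreign import ccall unsafe "dnorm4"
  c_dnorm :: CDouble -> CDouble -> CDouble -> CInt -> CDouble

-- convenience wrapper working in ordinary Doubles
dnormH :: Double -> Double -> Double -> Bool -> Double
dnormH x mu sigma giveLog =
  realToFrac (c_dnorm (realToFrac x) (realToFrac mu) (realToFrac sigma)
                      (if giveLog then 1 else 0))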

But for rnorm, if I use foreign import ccall rnorm :: Double -> Double
-> IO Double, then my main function would have to carry the IO monad all the time.
Maybe the better way is that I should encapsulate it into a state
monad rng -> (rng, randomnumber). Therefore for rnorm, I should first
write a wrapper in c like
seed2 rnorm_wrapper(seed1, parameters){
  set_seed(seed1);
  rnorm(parameters);
  get_seed(seed2);
}
and then write another wrapper for rnorm_wrapper in haskell.

Can somebody tell me if this is the right approach? Thank you!

Best,
Peng
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] How to build a python module using Haskell?

2007-02-25 Thread Albert Lee

I have found some docs about building a Windows DLL using Haskell and then using
ctypes from Python.
But how do I do this on Linux/FreeBSD?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was:Genuine Eratosthenes sieve)

2007-02-25 Thread Claus Reinke

   Algorithm        10^3     10^4     10^5      10^6      10^7      10^8
   Reinke           0.04     1.21    41.00         -         -         -
- Reinke is the ``applyAt'' algorithm Claus Reinke posted here


while I'm not unhappy that this code is performing reasonably well, it was 
mostly an attempt to get closer to what some perceived as the original sieve 
specification, with some minor modifications. for performance comparisons, 
that has the huge disadvantage of having to lug around an increasing ballast
of crossed-out numbers. with a straightforward run-length encoding of those 
zeroes, the code is likely to scale rather better (one dash less, and closer to
Naive ;-).

there are also some obvious improvements to the folklore sieve that bring it
closer (in spirit, and partially in performance) to the faster versions. 


attached is the code for my own experiments, for your entertainment, where

   folklore: the folklore sieve
   folklore2: without mod, starting (sieve p) from p*p
   folklore3: merging the sieves, instead of stacking them as implicit thunks

   genuine: the applyAt version, dubbed Reinke
   genuineRLC: with run-length coding for all those zeroed-out positions

   dual: the not so naive, fairly fast one; for reference (speed limit for the above)


btw, given that the naive algorithm performs so well, perhaps it isn't all that naive 
after all? it does seem to encode the observed effects of nested sieves, without 
encoding the sieves themselves and their ballast. perhaps there is a corresponding,
dual version of the wheels as well, eliminating their currently major disadvantage 
of managing the ever growing wheels in explicit form? for instance, the current
naive/dual algorithm computes the (x `mod` p) from scratch for each x, instead
of incrementally from ((x-1)`mod`p).

claus


Sieve.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to build a python module using Haskell?

2007-02-25 Thread Marc Weber
On Sun, Feb 25, 2007 at 11:33:15PM +0800, Albert Lee wrote:
I have found some docs about build Windows DLL using Haskell and  then
using ctypes from python.
But  how to do this on Linux/FreeBSD ?

There are no dlls on linux. They are windows only.
What you need on linux is either a static library (.a) or a shared
library (.so).
I haven't written shared libraries using haskell yet.
But all you need to know is:
How to export haskell functions using the ffi (-> C interface) (how to
call haskell from C).
How to use C functions from python. I think you've already managed this
second step.
I would have to look up how to do it.
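A minimal sketch of the Haskell side might look like this (module and
function names are made up, just to show the shape of a foreign export):

{-# LANGUAGE ForeignFunctionInterface #-}
module Adder where

import Foreign.C.Types (CInt)

add :: CInt -> CInt -> CInt
add x y = x + y

foreign export ccall add :: CInt -> CInt -> CInt

GHC generates a C stub for the export; the exact flags for linking the
result into a .so, and the need to initialise the runtime (hs_init)
before calling in from Python via ctypes, depend on your GHC version,
so check the FFI chapter of the GHC user's guide.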

If you still get stuck somewhere post again.  

HTH 
Marc Weber
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] R wrapper in haskell

2007-02-25 Thread Marc Weber
 time-consuming and it takes months in R. I like the language very much
 so far, but I do miss some important functions in R which can generate
^ typo ? ;)
 But for rnorm, if I use foreign import ccall rnorm :: Double - Double
 - IO Double, then my main function should carry IO monad all the time.
 Maybe the better way is that I should encapsulate it into a state
 monad rng - (rng, randomnumber). Therefore for rnorm, I should first
 write a wrapper in c like
 seed2 rnorm_wrapper(seed1, parameters){
   set_seed(seed1);
   rnorm(parameters);
   get_seed(seed2);
 }
set_seed
rnorm
get_seed
don't have any side effects as long as you consider this sequence as being
atomic. If you don't use multithreading it should work fine.  If you want to
use multithreading have a look at
http://haskell.org/haskellwiki/GHC/Concurrency, especially at 
Software Transactional Memory (STM) which is what you might need here (?)
 and then write another wrapper for rnorm_wrapper in haskell.
 
 Can somebody tell me if this is the right approach? Thank you!

Have a look at (haskell.org - library and tools - Mathematics
(http://haskell.org/haskellwiki/Libraries_and_tools/Mathematics)
library Probabilistic Functional Programming
Perhaps this lib does exactly what you need.

There is another important source of packages: 
http://hackage.haskell.org/packages/archive/pkg-list.html

Which way to go? I'm not a haskell expert so I can't tell you.
It depends on what you need and how much time you want to spent on this topic ;)

hope this helps
Marc Weber
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was: Genuine Eratosthenes sieve)

2007-02-25 Thread Lennart Augustsson

Here's another program you can add.  It's fairly short and efficient.

-- Lennart

import System (getArgs)

infixr :>

data StreamInt = !Int :> StreamInt

(!) :: StreamInt -> Int -> Int
(x :> _)  ! 0 = x
(_ :> xs) ! n = xs ! (n-1)

-- By replacing lprimes on the next line by '5 :> gen 7 4 2' this algorithm
-- runs in very little space, but is somewhat slower.
primes = 2 :> 3 :> lprimes
  where isPrime (p :> ps) n = n `rem` p /= 0 && (p*p > n || isPrime ps n)
        lprimes = 5 :> gen 7 4 2
        gen n a b = if isPrime lprimes n then n :> gen (n+a) b a
                    else gen (n+a) b a


printNthPrime n = print (n, primes ! (n-1))

main = do
args - getArgs
printNthPrime $ read $ head args



On Feb 25, 2007, at 12:51 , Melissa O'Neill wrote:

For those enjoying the fun with prime finding, I've updated the  
source at


http://www.cs.hmc.edu/~oneill/code/haskell-primes.zip

I've tweaked my code a little to improve its space behavior when  
finding primes up to some limit, added an up-to-limit version of  
the Naive Primes algorithm, and added Oleg's prime finding code too.


I also got a chance to look at space usage more generally.  I won't  
reproduce a table here, but the conclusions were more-or-less what  
you'd expect.  The unlimited-list algorithms used O(n) space to
find n primes (except for Runciman's algorithm, which appeared to
be much worse), and the primes-up-to-a-limit algorithms used
O(sqrt(n)) space to find the nth prime.


Both of these are better than the classic C algorithm, which uses
O(n log n) space to find the nth prime.  For example, heap profiling
shows that my own O(sqrt(n)) algorithm uses only 91200 bytes to
find the 10^7th prime, whereas the classic C algorithm needs at
least 11214043 bytes for its array -- a factor of more than 100
difference, and one that gets worse for larger n.


Lennart Augustsson wrote:
Another weird thing is that much of the Haskell code seems to work  
with Integer whereas the C code uses int.


Originally, I was comparing Haskell with Haskell, and for that  
purpose I wanted to have a level playing field, so going with  
Integer everywhere made sense.



That doesn't seem fair.


Actually, to the extent that any of the comparisons are fair, I  
think this one is too.  After all, typical Haskell code uses  
Integer and typical C code uses int.  I could use arrays in my  
Haskell code and never use laziness, but when I program in Haskell,  
I'm not trying to exactly recreate C programs, but rather write  
their Haskell equivalents.  For example, to me, producing a lazy  
list was essential for a true Haskell feel.  For some people, the  
Haskell feel also includes treating the language as a declarative  
specification language where brevity is everything -- but for me,  
other things (like fundamental algorithmic efficiency and  
faithfulness to the core ideas that make the Sieve of Eratosthenes  
an *efficient* algorithm) are universal and ought to be common to  
both C and Haskell versions.


But to allow a better comparison with C, I've added a run for an  
Int version of my algorithm.  With that change, my code is closer  
to the speed of the C code.  More interestingly, for larger n, I  
seem to be narrowing the gap.  At 10^6, my code runs nearly 30  
times slower than the classic C version, but at 10^8, I'm only  
about 20 times slower.  This is especially interesting to me because there
was some (reasonable-looking) speculation from apfelmus several
days ago, which suggested that my use of a priority queue incurred
an extra log(n) overhead, from which you would expect a worse
asymptotic complexity, not an equivalent or better one.


Melissa.

Enc. (best viewed with a fixed-width font)

   ---------------------------------------------------------------------
                     Time (in seconds) for Number of Primes
   Algorithm        10^3     10^4     10^5      10^6      10^7      10^8
   ---------------------------------------------------------------------
   C-Sieve          0.00     0.00     0.01      0.29      5.12     88.24
   O'Neill (#3)     0.01     0.04     0.55      8.34    122.62   1779.18
   O'Neill (#2)     0.01     0.06     0.95     13.85    194.96   2699.61
   O'Neill (#1)     0.01     0.07     1.07     15.95    230.11         -
   Bromage          0.02     0.39     6.50    142.85         -         -
   sieve (#3)       0.01     0.25     7.28    213.19         -         -
   Naive (#2)       0.02     0.59    14.70    386.40         -         -
   Naive (#1)       0.32     0.66    16.04    419.22         -         -
   Runciman         0.02     0.74    29.25         -         -         -
   Reinke           0.04     1.21    41.00         -         -         -
   Zilibowitz       0.02     2.50   368.33         -         -         -
   Gale (#1)        0.12    17.99        -         -         -         -
   sieve (#1)       0.16    32.59        -         -         -         -
   sieve (#2)       0.01    32.76        -         -         -         -
   Oleg             0.18

Re: [Haskell-cafe] A real Haskell Cookbook

2007-02-25 Thread Chris Eidhof

Hey everyone,

we added some examples to this page. There are some topics that don't  
have any examples, notably:

# 11 Network Programming
# 12 XML
* 12.1 Parsing XML
# 13 Databases
* 13.1 MySQL
* 13.2 PostgreSQL
* 13.3 SQLite
# 14 FFI
* 14.1 How to interface with C

If anyone feels like filling up some of those sections, that would be  
great.


-chris

On 21 Feb, 2007, at 20:17 , Martin Bishop wrote:

I made a preliminary page, and fleshed out some of the headers/sub-headers
on the wiki page for a good Haskell Cookbook (aka NOT a
PLEAC clone).  Please contribute and/or fix the examples and  
explanations so we can make a really nice Cookbook for newbies. :)


 http://haskell.org/haskellwiki/Cookbook



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was:Genuine Eratosthenes sieve)

2007-02-25 Thread Melissa O'Neill

Claus Reinke wrote:
   folklore3: merging the sieves, instead of stacking them as  
implicit thunks


Here's Claus's code:
primes3 = 2 : folklore3 [] [3,5..]

folklore3 pns xs = x : folklore3 (insert (x,x*x) pns') xs'
  where (x,pns',xs') = sieve3 pns xs

sieve3 ((p,n):pns) (x:xs) | x<n       = (x,((p,n):pns),xs)
                          | otherwise = sieve3 (insert (p,(n+p)) pns) ([x|x>n]++xs)
sieve3 []          (x:xs) = (x,[],xs)

insert (p,n) []            = [(p,n)]
insert (p,n) ((p',n'):pns) | n<=n'     = (p,n):(p',n'):pns
                           | otherwise = (p',n'):insert (p,n) pns


This isn't a variant of the folklore sieve, it's actually the real  
one!  Let's take a look at ``pns'' at the 20th prime, having just  
found that 67 is prime (we're about to discover that 71 is prime):


[(3,69),(5,70),(7,70),(11,121),(13,169),(17,289),(19,361),(23,529), 
(29,841),(31,961),(37,1369),(41,1681),(43,1849),(47,2209),(53,2809), 
(59,3481),(61,3721),(67,4489)]


As you can see, 70 will be crossed off twice, once by the 5 iterator  
and once by the iterator for 7.  And we won't think about 11, 13,  
etc. until they're needed.


This algorithm is doing pretty much exactly what mine does, except  
that in mine I use a priority queue to represent this information.   
In fact, with my most recent code, I'd only store (3,69), (5,70),  
(7,70) in the priority queue and keep (11,121), (13,169), ...,  
(67,4489) in a feeder queue so that I only use the power of the  
priority queue when I need it.
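For readers who just want to see the shape of that bookkeeping, here's a
small sketch using Data.Map in place of the priority queue (a commonly
seen variant, not my actual code):

import qualified Data.Map as M

primesMap :: [Integer]
primesMap = go M.empty [2..]
  where
    go _ [] = []
    go table (x:xs) = case M.lookup x table of
      Nothing ->                       -- x has no pending multiple: it's prime
        x : go (M.insertWith (++) (x*x) [x] table) xs
      Just ps ->                       -- x is composite: advance each p to x+p
        go (foldl (\t p -> M.insertWith (++) (x+p) [p] t) (M.delete x table) ps) xs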


Melissa.

P.S. Mine actually wouldn't have quite the same numbers, because I go  
to a bit of trouble to skip numbers like 70 which will never actually  
show up in the x:xs list.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] OO Design in Haskell Example (Draft)

2007-02-25 Thread Steve Downey

In the last OO design in Haskell thread (and probably in every one
preceeding it), it was suggested that having some examples might be a good
idea.

Since most people with existing designs will have some familiarity with
Design Patterns, and those are typical building blocks for OO designs, it
occurred to me that implementing some of them might be a useful exercise, if
for nothing other than learning some more Haskell.

Now, some of them are probably bad ideas for implementing in Haskell.
There's a better, or more natural, way than what is suggested by the design
pattern. Visitor is probably not a good pattern to follow, for example. On
the other hand, others may still be useful, even in a functional language.

So, I've been working on a Composite example. I've used existential types to
have a generic proxy to the base type, rather than a simple algebraic type,
since adding new leaves to the algebraic type means modifying the whole
type, a violation of the Open-Closed principle (open for extension, closed
for modification)

The interface of the composite. Two methods, add and draw.


class IComponent e where
    draw :: e -> String
    add  :: (IComponent e') => e -> e' -> Component



A proxy type which can hold any kind of component, and provides the
'virtual' dispatch implementation. That is, it forwards to the add
or draw implementation of the instance it is proxying for.


data Component =
    forall e. (IComponent e) => Component e



componentDraw :: Component -> String
componentDraw (Component c) = draw c



componentAdd :: (IComponent e) => Component -> e -> Component
componentAdd (Component e) a  = Component (add e a)



instance IComponent Component where
draw = componentDraw
add = componentAdd



The Leaf type is the leaf node in this composite; its add is a
no-op, except for wrapping the value in a Component, since there
isn't an implicit down cast from the 'derived' Leaf to the 'base'
Component.


data Leaf =
Text String
deriving(Show, Eq)



leafDraw :: Leaf -> String
leafDraw (Text s) = show s



leafAdd :: (IComponent e) => Leaf -> e -> Component
leafAdd s _  = Component s



instance IComponent Leaf where
draw = leafDraw
add = leafAdd




The Composite type, which holds a list of Components through the
composite proxy. I was tempted to make the list a state variable,
so that add could modify rather than produce a new Many, but I
wanted to get the basics working.


data Composite =
Many [Component]



compositeDraw :: Composite -> String
compositeDraw (Many [])      = "()"
compositeDraw (Many leaves)  = "(" ++ (foldr1 (++) $ map draw leaves) ++ ")"


compositeAdd :: (IComponent e) => Composite -> e -> Component
compositeAdd (Many leaves) c =
    Component $ Many ((Component c) : leaves)



instance IComponent Composite where
draw = compositeDraw
add = compositeAdd

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Memoisation

2007-02-25 Thread Tony Morris
I have a backtracking algorithm that I need to memoise with. Rather than
go into the intricacies of the algorithm, I figure (and hope) the
factorial function is trivial enough to point out my problem.

Simply, suppose I wish to calculate the factorial of 10, then later
the factorial of 5. I have already calculated the factorial of 5, but
now I must do it again. I have thought of various ways of preventing
this; perhaps passing an Array in a state monad. I'm wondering if there
is a general solution for this kind of problem.

Thanks for any tips.

-- 
Tony Morris
http://tmorris.net/




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Memoisation

2007-02-25 Thread Bryan Burgers

On 2/25/07, Tony Morris [EMAIL PROTECTED] wrote:

I have a backtracking algorithm that I need to memoise with. Rather than
go into the intricacies of the algorithm, I figure (and hope) the
factorial function is trivial enough to point out my problem.

Simply, suppose I wish to calculate the factorial of 10, then later
the factorial of 5. I have already calculated the factorial of 5, but
now I must do it again. I have thought of various ways of preventing
this; perhaps passing an Array in a state monad. I'm wondering if there
is a general solution for this kind of problem.

Thanks for any tips.

--
Tony Morris


You may be able to glean some ideas from a previous discussion at:
http://comments.gmane.org/gmane.comp.lang.haskell.cafe/19623
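For the factorial example, one common trick (just a sketch, not
necessarily what that thread does) is to share earlier work through a
lazily built table:

facts :: [Integer]
facts = 1 : zipWith (*) facts [1..]   -- facts !! n == n!

factorial :: Int -> Integer
factorial n = facts !! n              -- later calls reuse the same list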

Bryan Burgers
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] R wrapper in haskell

2007-02-25 Thread Peng Zhang

Thanks for the reply.

On 2/25/07, Marc Weber [EMAIL PROTECTED] wrote:

 time-consuming and it takes months in R. I like the language very much
 so far, but I do miss some important functions in R which can generate
^ typo ? ;)
 But for rnorm, if I use foreign import ccall rnorm :: Double - Double
 - IO Double, then my main function should carry IO monad all the time.
 Maybe the better way is that I should encapsulate it into a state
 monad rng - (rng, randomnumber). Therefore for rnorm, I should first
 write a wrapper in c like
 seed2 rnorm_wrapper(seed1, parameters){
   set_seed(seed1);
   rnorm(parameters);
   get_seed(seed2);
 }
set_seed
rnorm
get_seed
doesn't have any side effect as long as you consider this sequence beeing


rnorm itself has a side effect, right? It changes the seed of the global
random number generator.
That is why I think I need rnorm_wrapper to purify it.


atomic. If you don't use multithreading it should work fine.  If you want to
use multithreading have a look at
http://haskell.org/haskellwiki/GHC/Concurrency, especially at
Software Transactional Memory (STM) which is what you might need here (?)
 and then write another wrapper for rnorm_wrapper in haskell.

 Can somebody tell me if this is the right approach? Thank you!

Have a look at (haskell.org - library and tools - Mathematics
(http://haskell.org/haskellwiki/Libraries_and_tools/Mathematics)
library Probabilistic Functional Programming
Perhaps this lib does exactly what you need.


I did see this, but I think it is not enough for me.



There is another important source of packages:
http://hackage.haskell.org/packages/archive/pkg-list.html

Which way to go? I'm not a haskell expert so I can't tell you.
It depends on what you need and how much time you want to spent on this topic ;)

hope this helps
Marc Weber

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Neil Mitchell

Hi

I'm starting to play with GADT's, and I was trying to write a safe
version of head and tail, on a safe version of lists. What I came up
with is:


data ConsT a
data NilT

data List a t where
    Cons :: a -> List a b -> List a (ConsT b)
    Nil  :: List a NilT

instance Show a => Show (List a t) where
    show (Cons a b) = show a ++ ":" ++ show b
    show Nil = "[]"

safeHead :: List a (ConsT t) -> a
safeHead (Cons a b) = a

safeTail :: List a (ConsT t) -> List a t
safeTail (Cons a b) = b

safeMap :: (a -> b) -> List a t -> List b t
safeMap f Nil = Nil
safeMap f (Cons a b) = Cons (f a) (safeMap f b)


Defining safeMap was trivial, but one thing I couldn't figure out was
how to write something like fromList:

fromList [] = Nil
fromList (a:as) = Cons a (fromList as)

fromList could obviously be anything such as reading from a file etc,
where the input is not known in advance. Are there any techniques for
this?

In addition I was expecting to find some example like this in one of
the papers/examples on GADT's, but I didn't. The Haskell wikibook has
a slight example along these lines, but not with the recursive field
in ConsT. [http://en.wikibooks.org/wiki/Haskell/GADT]

Any help or hints would be appreciated,

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Stefan O'Rear
On Sun, Feb 25, 2007 at 10:40:13PM +, Neil Mitchell wrote:
 Hi
 
 I'm starting to play with GADT's, and I was trying to write a safe
 version of head and tail, on a safe version of lists. What I came up
 with is:
 
[code moved later]

 Defining safeMap was trivial, but one thing I couldn't figure out was
 how to write something like fromList:
 
 fromList [] = Nil
 fromList (a:as) = Cons a (fromList as)
 
 fromList could obviously be anything such as reading from a file etc,
 where the input is not known in advance. Are there any techniques for
 this?
 
 In addition I was expecting to find some example like this in one of
 the papers/examples on GADT's, but I didn't. The Haskell wikibook has
 a slight example along these lines, but not with the recursive field
 in ConsT. [http://en.wikibooks.org/wiki/Haskell/GADT]

Since we don't know in advance the length of the list, we need an
existential type; this immediately causes problems because existential
types do not exist in any current Haskell implementation.  There are
two approaches:

\begin{code}
data ConsT a
data NilT

data List a t where
    Cons :: a -> List a b -> List a (ConsT b)
    Nil  :: List a NilT

instance Show a => Show (List a t) where
    show (Cons a b) = show a ++ ":" ++ show b
    show Nil = "[]"

safeHead :: List a (ConsT t) -> a
safeHead (Cons a b) = a

safeTail :: List a (ConsT t) -> List a t
safeTail (Cons a b) = b

safeMap :: (a -> b) -> List a t -> List b t
safeMap f Nil = Nil
safeMap f (Cons a b) = Cons (f a) (safeMap f b)

-- Using an existential datatype box
data VarList a = forall t. VarList (List a t)

fromListV :: [a] -> VarList a
fromListV [] = VarList Nil
fromListV (x:xs) = case fromListV xs of
                     VarList xs' -> VarList (Cons x xs')

-- using CPS transform (turns existentials into rank2)
-- generally preferred because it avoids naming a one-use data type,
-- but YMMV

fromListC :: [a] -> (forall t. List a t -> r) -> r
fromListC [] fn = fn Nil
fromListC (x:xs) fn = fromListC xs (\sl -> fn (Cons x sl))
\end{code}

HTH

Stefan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Derek Elkins

Neil Mitchell wrote:

Hi

I'm starting to play with GADT's, and I was trying to write a safe
version of head and tail, on a safe version of lists. What I came up
with is:


data ConsT a
data NilT

data List a t where
    Cons :: a -> List a b -> List a (ConsT b)
    Nil  :: List a NilT

instance Show a => Show (List a t) where
    show (Cons a b) = show a ++ ":" ++ show b
    show Nil = "[]"

safeHead :: List a (ConsT t) -> a
safeHead (Cons a b) = a

safeTail :: List a (ConsT t) -> List a t
safeTail (Cons a b) = b

safeMap :: (a -> b) -> List a t -> List b t
safeMap f Nil = Nil
safeMap f (Cons a b) = Cons (f a) (safeMap f b)


Defining safeMap was trivial, but one thing I couldn't figure out was
how to write something like fromList:

fromList [] = Nil
fromList (a:as) = Cons a (fromList as)


You need to use existentials.

The following code hides one in its definition:
fromList :: [a] -> (List a NilT -> r)
         -> (forall b. List a (ConsT b) -> r) -> r
fromList [] nilk consk = nilk Nil
fromList (x:xs) nilk consk = fromList' xs (consk . Cons x)
    where fromList' :: [a] -> (forall b. List a b -> r) -> r
          fromList' [] k = k Nil
          fromList' (x:xs) k = fromList' xs (k . Cons x)


Also it should be possible to arrange this to use a very simple library 
for existentials I made, made possible by GHC 6.6's new support for 
impredicativity.


-- exists a. F a
-- = exists a. not (not (F a))
-- = not (forall a. not (F a))
type Exists f = forall r. (forall a. (f a -> r)) -> r

open :: Exists f -> (forall a. (f a -> r)) -> r
open package k = package k

pack :: f a -> Exists f
pack a k = k a
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Neil Mitchell

Hi Stefan,


Since we don't know in advance the length of the list, we need an
existensial type; this immediately causes problems because existential
types do not exist in any current Haskell implementation.  There are
two approaches:




-- Using an existential datatype box
data VarList a = forall t. VarList (List a t)

fromListV :: [a] -> VarList a
fromListV [] = VarList Nil
fromListV (x:xs) = case fromListV xs of
                     VarList xs' -> VarList (Cons x xs')

-- using CPS transform (turns existentials into rank2)
-- generally preferred because it avoids naming a one-use data type,
-- but YMMV

fromListC :: [a] -> (forall t. List a t -> r) -> r
fromListC [] fn = fn Nil
fromListC (x:xs) fn = fromListC xs (\sl -> fn (Cons x sl))


How do I get my original function back which just turns a standard
list into one of the funky lists, or is that just impossible with
GADT's? Do I now have to wrap all the functions I use, i.e. pass
safeMap in CPS?

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Stefan O'Rear
On Sun, Feb 25, 2007 at 11:11:14PM +, Neil Mitchell wrote:
 Hi Stefan,
 
 Since we don't know in advance the length of the list, we need an
 existensial type; this immediately causes problems because existential
 types do not exist in any current Haskell implementation.  There are
 two approaches:
 
 
 -- Using an existential datatype box
 data VarList a = forall t. VarList (List a t)
 
 fromListV :: [a] -> VarList a
 fromListV [] = VarList Nil
 fromListV (x:xs) = case fromListV xs of
                      VarList xs' -> VarList (Cons x xs')
 
 -- using CPS transform (turns existentials into rank2)
 -- generally preferred because it avoids naming a one-use data type,
 -- but YMMV
 
 fromListC :: [a] -> (forall t. List a t -> r) -> r
 fromListC [] fn = fn Nil
 fromListC (x:xs) fn = fromListC xs (\sl -> fn (Cons x sl))
 
 How do I get my original function back which just turns a standard
 list to one of the funky lists, or is that just impossible with
 GADT's? Do I now have to wrap all the fuctions I use, i.e. pass
 safeMap in CPS?

AFAIK you can't. Fortunately the CPS transform can be very local:

-- write this as you normally would, no cps at all
do_something_with_safe_list :: List t a -> (something -> somethingelse ...)
-- assoc for clarity

do_something_with_unsafe_list :: [a] -> (same as above)
do_something_with_unsafe_list ls = fromListC ls do_something_with_safe_list
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] OO Design in Haskell Example (Draft)

2007-02-25 Thread Tim Docker
Steve Downey wrote: 
  So, I've been working on a Composite example. I've used
  existential types to have a generic proxy to the base
  type, rather than a simple algebraic type, since adding
  new leaves to the algebraic type means modifying the whole
  type, a violation of the Open-Closed principle (open for
  extension, closed for modification)

Rather than using existential types, a simple record of
functions can be often be useful. ie:

data Component = Component {
    draw :: String,
    add  :: Component -> Component
}

It might be worth comparing this approach with the (more
complex) one you have described.
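A rough sketch of how leaves and composites might fall out of that
record (just an illustration, with the add-semantics chosen arbitrarily):

-- repeating the record so the sketch stands alone
data Component = Component
  { draw :: String
  , add  :: Component -> Component
  }

leaf :: String -> Component
leaf s = self
  where self = Component { draw = s, add = \c -> composite [self, c] }

composite :: [Component] -> Component
composite cs = Component
  { draw = "(" ++ concatMap draw cs ++ ")"
  , add  = \c -> composite (c : cs)
  }

-- e.g. draw (add (leaf "a") (leaf "b")) == "(ab)"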

Tim
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Memoisation

2007-02-25 Thread Jeremy Shaw
At Mon, 26 Feb 2007 07:54:56 +1000,
Tony Morris wrote:
 
 I have a backtracking algorithm that I need to memoise with. Rather than
 go into the intricacies of the algorithm, I figure (and hope) the
 factorial function is trivial enough to point out my problem.

Have you seen the paper, Modular Lazy Search for Constraint
Satisfaction Problems by Thomas Nordin and Andrew Tolmach?

http://web.cecs.pdx.edu/~apt/jfp01.ps

It starts with simple backtracking, and then adds 'memoising' to
extend the algorithm to conflict directed backtracking, backmarking,
forward checking, dynamic variable ordering, and other fun stuff.

Although the algorithms are designed around solving constraint
satisfaction problems, it is pretty easy to apply the technique to a
domain-specific solver as well.

I definitely recommend reading the paper if you are doing any sort of
backtracking-type solvers.

j.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hi can u explain me how drop works in Haskell

2007-02-25 Thread iliali16

Hi, I am trying to implement the function drop in Haskell. The thing is that
I have been trying for some time and I came up with this code where I am
trying to do recursion:

drop :: Integer -> [Integer] -> [Integer]
drop 0 (x:xs) = (x:xs)
drop n (x:xs)
    | n < lList (x:xs) = dropN (n-1) xs :
    | otherwise = []

So I want to understand how this would work and what exactly should I put
as an answer on line 4, because that is where I am lost. I know I might have got the
base case wrong as well, but I don't know what to think for it. I have written
the lList function before writing this one. Thanks to those who can
help me understand this. Thanks a lot in advance! Have a nice day!
-- 
View this message in context: 
http://www.nabble.com/Hi-can-u-explain-me-how-drop-works-in-Haskell-tf3290490.html#a9152251
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders(was:Genuine Eratosthenes sieve)

2007-02-25 Thread Claus Reinke
This isn't a variant of the folklore sieve, it's actually the real one!  

:-)

well, one thing i was trying for (and what perhaps one might like to see in a 
paper), is a way of relating the various code alternatives to each other, with 
suitably similar intermediate stages. not quite as detailed as deriving one from 
the other by program transformation, and not always meaning-preserving, but 
small enough steps that it becomes easier to understand the differences, and 
how one version might grow out of another, based on which performance 
concerns and implementation decisions. your paper does some of that, but 
the steps feel more like jumps to me.


we don't usually have compiler optimisations that could effectively transform
the folklore version of the sieve into any of the other prime enumerators we've
seen. and it is nice to see sieve added to the infamous quicksort as an example 
of a well-known to be beautiful, but little-known to be inefficiently realised 
specification (Lennart provided a convincing alternative for that other one, too;) 
- they are advertised and accepted unquestioned all too often. but something in 
me balks at the idea that the folkore version of the sieve is not the sieve 
*algorithm*, in spite of your detailed cross-off footprint and performance 
curve arguments.


i'm not saying you're wrong, and as one of those who prefer operational 
semantics
over denotational semantics in many contexts, i realise that it may be seen as 
odd
if i prefer to see the high-level versions as specifications of algorithms, 
when the
implementation behaviour might differ substantially from the intended algorithm.

but we are in the grey area between "permute until sorted" and "sort like this",
and thinking of the modulus of a number as a static property (that might be
derived incrementally from that of its predecessor) rather than an explicit check
for divisibility looks very similar to "this is quicksort, but i didn't really mean
(++) to do appends". if one has to use quicksort on binary lists, one better
uses Augustsson's append-free qsort instead of the folklore version, but is that

a better implementation, or a different algorithm? does it depend on how we
look at the question?

if you could make that part of your argument in a less controversial style, without
loaded words like "real", "phony", "fooling", focussing more on observations than
on any agenda (however innocent), this reader at least would find it easier to focus
on the really interesting issues you raise for discussion. also, the present thread 
(one of many incarnations) demonstrates that there are other issues to be 
exposed and explored when discussing styles of prime sieves. not to mention 
the modern-again issue of transforming executable specifications to efficient 
implementations.


the progression

   mod sieves (folklore)
   -> start (sieve p) at p*p
   -> incremental sieves (folklore2)
   -> merged incremental sieves (folklore3)
   -> more efficient representation of merged sieves (your priority queue version)
   -> other useful optimisations and tweaks that further hide the ideas (your
      fastest versions) ..

makes it easier for me to see how the initial program relates to the others, and the 
closeness of folklore3 to one of your versions is intentional, as are the differences.


the versions are not quite equivalent (eg folklore2/3 go wrong earlier than 
folklore when using Int instead of Integer), but if there is any difference wrt
the cross-out footprint (apart from the p*p thing), it is likely to be in the precise 
organisation of sieve merging after folklore2. otherwise, they seem to embody 
fairly natural improvement steps of the original specification-level code.


Let's take a look at ``pns'' at the 20th prime, having just  
found that 67 is prime (we're about to discover that 71 is prime):


[(3,69),(5,70),(7,70),(11,121),(13,169),(17,289),(19,361),(23,529), 
(29,841),(31,961),(37,1369),(41,1681),(43,1849),(47,2209),(53,2809), 
(59,3481),(61,3721),(67,4489)]


As you can see, 70 will be crossed off twice, once by the 5 iterator  
and once by the iterator for 7.  And we won't think about 11, 13,  
etc. until they're needed.


as it happens, even if we run folklore3 over [2..], 70 will be crossed off
exactly once, by the sieve for 2. the sieves for 5 and 7 run over the gap
left behind, as they do in folklore and folklore2 (one could move that
gap jumping from sieve3 to insert, though..). the sieves for 5 and 7 
know about 70 the same way that (`mod`5) and (`mod`7) know 
about 70, but that knowledge isn't called upon for sieving, only to find 
greater numbers with modulus 0.


in contrast, sieves in genuine will replace non-primes by zero, so later 
sieves can still run into the same position. whether we count the zero in
that position as a gap, to be left intact, or as a ghost of the non-prime, to 
be zeroed again, would seem to be a matter of viewpoint?


for me, the question is more one of vertical vs 

Re: [Haskell-cafe] Hi can u explain me how drop works in Haskell

2007-02-25 Thread Chris Eidhof

Hey,

you're almost there:

drop :: Integer -> [a] -> [a]
drop 0 xs = xs
drop n (x:xs) = drop (n-1) xs

Your version fails when trying to do drop 10 [1..10]. My version  
fails when trying to do drop 10 [1..9], so you might want to try to  
see if you can come up with a solution for that!


Good luck,
-chris

On 25 Feb, 2007, at 18:43 , iliali16 wrote:



Hi I am trying to implement the function drop in haskell the thing  
is that I
I have been trying for some time and I came up with this code where  
I am

trying to do recursion:

drop :: Integer - [Integer] - [Integer]
drop 0 (x:xs) = (x:xs)
drop n (x:xs)
|n  lList (x:xs) = dropN (n-1) xs :
|otherwise = []

So I want to understand how would this work and what exacttly  
should I put
as an answer on line 4 couse that is where I am lost. I know I  
might got the
base case wrong as well but I don't know what to think for it. I  
have done
the lList as a function before writing this one. Thanks to those  
who can

help me understand this. Thanks alot in advance! Have a nice day!
--
View this message in context: http://www.nabble.com/Hi-can-u- 
explain-me-how-drop-works-in-Haskell-tf3290490.html#a9152251
Sent from the Haskell - Haskell-Cafe mailing list archive at  
Nabble.com.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hi can u explain me how drop works in Haskell

2007-02-25 Thread Yitzchak Gale

Hi Iliali,

You wrote:

Hi I am trying to implement the function drop in haskell


Chris Eidhof wrote:

you're almost there...


In case this is homework, you may also want to
look at:

http://www.haskell.org/haskellwiki/Homework_help

Regards,
Yitz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hi can u explain me how drop works in Haskell

2007-02-25 Thread Antonio Cangiano

On 2/25/07, iliali16 [EMAIL PROTECTED] wrote:



Hi I am trying to implement the function drop in haskell the thing is that
I
I have been trying for some time and I came up with this code where I am
trying to do recursion:

drop :: Integer - [Integer] - [Integer]
drop 0 (x:xs) = (x:xs)
drop n (x:xs)
|n  lList (x:xs) = dropN (n-1) xs :
|otherwise = []



drop :: Integer -> [a] -> [a]
drop n xs | n < 1 =  xs
drop _ []         =  []
drop n (_:xs)     =  drop (n-1) xs

Line 1: It specifies that drop will accept an Integer and a list, and return
a list;
Line 2: If n < 1, the function will return the list as it is (this pattern
is matched if you're dropping 0 or -2 elements, for example);
Line 3: No matter what Integer has been passed to the function, if the list
passed is empty, an empty list will be returned as well;
Line 4: Dropping n elements from a list is equivalent to dropping n-1
elements from the tail (xs) of that same list.

HTH
Antonio
--
http://antoniocangiano.com
Zen and the Art of Ruby Programming
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was: Genuine Eratosthenes sieve)

2007-02-25 Thread ajb
G'day all.

This one is pretty elegant.  A Pritchard sieve is actually an
Eratosthenes sieve with the loops reversed.  Unfortunately, it's
a bit slower.  Maybe someone else can speed it up a bit.

mergeRemove :: [Integer] -> [Integer] -> [Integer]
mergeRemove [] ys = []
mergeRemove xs [] = xs
mergeRemove xs'@(x:xs) ys'@(y:ys)
    = case compare x y of
        LT -> x : mergeRemove xs ys'
        EQ -> mergeRemove xs ys
        GT -> mergeRemove xs' ys

pritchardSieve :: Integer -> [Integer]
pritchardSieve n
    | n <= 16   = takeWhile (<=n) [2,3,5,7,11,13]
    | otherwise = removeComposites [2..n] (sieve [2..n`div`2])
    where
        removeComposites ps [] = ps
        removeComposites ps (cs@(c:_):css)
            = removeComposites' ps
            where
                removeComposites' [] = []
                removeComposites' (p:ps)
                    | p < c     = p : removeComposites' ps
                    | otherwise = removeComposites (mergeRemove ps cs) css

        pjs = pritchardSieve sn

        sn = isqrt n

        sieve [] = []
        sieve (f:fs)
            = composites pjs : sieve fs
            where
                composites [] = []
                composites (p:ps)
                    | pf > n || f `mod` p == 0 = [pf]
                    | otherwise = pf : composites ps
                    where
                        pf = p*f

Cheers,
Andrew Bromage
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Matthew Brecknell
Neil Mitchell wrote:
 data ConsT a
 data NilT
 
 data List a t where
     Cons :: a -> List a b -> List a (ConsT b)
     Nil  :: List a NilT

Stefan O'Rear wrote:
 data VarList a = forall t. VarList (List a t)
 
 fromListV :: [a] -> VarList a
 fromListV [] = VarList Nil
 fromListV (x:xs) = case fromListV xs of
                      VarList xs' -> VarList (Cons x xs')
 
 fromListC :: [a] -> (forall t. List a t -> r) -> r
 fromListC [] fn = fn Nil
 fromListC (x:xs) fn = fromListC xs (\sl -> fn (Cons x sl))

I noticed that fromListV and fromListC always force traversal of the
input list. I made various attempts to modify them to preserve laziness,
but this always resulted in a type error. For example:

 fromListV (x:xs) = case fromListV xs of
   ~(VarList l) -> VarList (Cons x l)

Couldn't match the rigid variable `a' against the rigid variable `a1'
  `a' is bound by the type signature for `fromListV'
  `a1' is bound by the pattern for `VarList' at gadt-list.hs:33:42-50
  Expected type: List a b
  Inferred type: List a1 t
In the second argument of `Cons', namely `l'
In the first argument of `VarList', namely `(Cons x l)'

I guess the strict traversal of the input list is inevitable,
considering that the concrete type of any (List a t) depends on the
length of the list (even when hidden behind an existential).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safe lists with GADT's

2007-02-25 Thread Stefan O'Rear
On Mon, Feb 26, 2007 at 03:42:56PM +1000, Matthew Brecknell wrote:
 Neil Mitchell wrote:
  data ConsT a
  data NilT
  
  data List a t where
      Cons :: a -> List a b -> List a (ConsT b)
      Nil  :: List a NilT
 
 Stefan O'Rear wrote:
  data VarList a = forall t. VarList (List a t)
  
   fromListV :: [a] -> VarList a
   fromListV [] = VarList Nil
   fromListV (x:xs) = case fromListV xs of
                        VarList xs' -> VarList (Cons x xs')
   
   fromListC :: [a] -> (forall t. List a t -> r) -> r
   fromListC [] fn = fn Nil
   fromListC (x:xs) fn = fromListC xs (\sl -> fn (Cons x sl))
 
 I noticed that fromListV and fromListC always force traversal of the
 input list. I made various attempts to modify them to preserve laziness,
 but this always resulted in a type error. For example:
 
  fromListV (x:xs) = case fromListV xs of
                       ~(VarList l) -> VarList (Cons x l)
 
 Couldn't match the rigid variable `a' against the rigid variable `a1'
   `a' is bound by the type signature for `fromListV'
   `a1' is bound by the pattern for `VarList' at gadt-list.hs:33:42-50
   Expected type: List a b
   Inferred type: List a1 t
 In the second argument of `Cons', namely `l'
 In the first argument of `VarList', namely `(Cons x l)'
 
 I guess the strict traversal of the input list is inevitable,
 considering that the concrete type of any (List a t) depends on the
 length of the list (even when hidden behind an existential).
 
I can't find it now, but there was a post saying this couldn't be done
because System FC is strict in type arguments, such as those bound by forall t.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders(was:Genuine Eratosthenes sieve)

2007-02-25 Thread Melissa O'Neill
Claus, you're absolutely right that my paper could be improved in a  
number of ways, and that there is actually a surprising amount of  
meat on these bones for further analysis.  We'll see what happens.   
If it actually gets accepted for JFP, it'll undoubtedly finally  
appear as a revised version with less controversial language.  In  
hindsight, using loaded words was a huge mistake,  because of the way  
it distracts from my main points.


But one thing I've learned is that it's better to do something  
imperfect and get it out there for people to read than to try to make  
it perfect and end up with a perpetual work in progress.  I had a  
busy summer last summer, and the paper was as good as I could make it  
in the very short amount of time I had.  It may have annoyed you in  
some ways, but hopefully it informed you in others.


Claus wrote:
something in me balks at the idea that the folkore version of the  
sieve is not the sieve *algorithm*, in spite of your detailed  
cross-off footprint and performance curve arguments.


I don't think you're alone in that viewpoint.

But it's clear to me that while some people are aware of the  
differences in crossing-off behavior, the performance costs that  
entails, and the way the ``folklore sieve'' becomes a dual of the  
very-naive-primes algorithm (trial division by all the primes less  
than n) -- and despite those differences, want to say it's all  
okay; there also are plenty of people who are unaware of these  
subtleties and believe that they have an honest-to-goodness  
functional implementation of one of the fastest prime finding methods  
out there, are hugely disappointed at the performance they get, and  
ascribe it not to poor algorithmic choices, but to the language they  
are working with.


As I allude to in my paper, I've talked to various people locally  
who've seen the ``folklore sieve'', and it's fascinating to watch the  
light go on for them, and hear exclamations like ``Oh!  So *that's*  
why it's so slow!''.  (FWIW, I think it was those reactions, which  
included for some a sense of having been duped, that lead me to feel  
okay with the rather strong terms I used in my paper.)


Claus also wrote:
but we are in the grey area between permute until sorted and  
sort like this, and thinking of the modulus of a number as a  
static property (that might be derived incrementally from that of  
its predecessor) rather than an explicit check


I think modulus is a bit of a red herring -- its a symptom, not the  
disease.  I would argue that if the agent that crosses off 17s has  
to look at 19 and say ``that's okay, let that one pass through,''
whether it passes it through based on modulus, or because it is less
than 289, it's still an examination of 19 by the cross-off-17 agent  
where that agent has the power to allow 19 through or not.   Seen  
that way, I have a very hard time accepting the algorithm as a  
faithful implementation of the Sieve of Eratosthenes.   Similarly, I  
have a hard time accepting the faithfulness an algorithm where 70  
doesn't get a visit from the cross-off agents for 5 and 7.


But I agree though, that it may be possible to perform progressive  
transformations from a ``folklore sieve'', which doesn't cross off  
according to the proper conventions to something that does.   
Somewhere along the way, there's going to be a grey area, and arguing  
about definitions in that grey area is likely to be a waste of time.   
At some point in the middle, you'll have something that can be viewed  
from both perspectives.   But the existence of such a progression of  
algorithms between algorithm A and algorithm B doesn't mean that A  
and B are fundamentally the same.


as it happens, even if we run folklore3 over [2..], 70 will be  
crossed off
exactly once, by the sieve for 2. the sieves for 5 and 7 run over  
the gap

left behind, as they do in folklore and folklore2 (one could move that
gap jumping from sieve3 to insert, though..). the sieves for 5 and  
7 know about 70 the same way that (`mod`5) and (`mod`7) know  
about 70, but that knowledge isn't called upon for sieving, only to  
find greater numbers with modulus 0.


Perhaps we're saying the same thing and misunderstanding each other,  
but here's what I see on an instrumented version of your code that  
shows the state of affairs at each loop iteration:

...
   (Prime 67,[(2,68),(3,69),(5,70),(7,70),(11,121), ...]),
   (Loops 68,[(2,68),(3,69),(5,70),(7,70),(11,121), ...]),
   (Loops 69,[(3,69),(2,70),(5,70),(7,70),(11,121), ...]),
   (Loops 70,[(2,70),(5,70),(7,70),(3,72),(11,121), ...]),
   (Loops 71,[(5,70),(7,70),(2,72),(3,72),(11,121), ...]),
   (Loops 71,[(7,70),(2,72),(3,72),(5,75),(11,121), ...]),
   (Prime 71,[(2,72),(3,72),(5,75),(7,77),(11,121), ...]),
   (Loops 72,[(2,72),(3,72),(5,75),(7,77),(11,121), ...]),
   (Loops 73,[(3,72),(2,74),(5,75),(7,77),(11,121), ...]),
   (Prime 73,[(2,74),(3,75),(5,75),(7,77),(11,121), ...]),
   (Loops 

Re: [Haskell-cafe] Hi can u explain me how drop works in Haskell

2007-02-25 Thread Thomas Hartman

Here's my, probably very obvious, contribution.

What I'd like feedback on is

1) code seem ok? (hope so!)
2) What do you think of the tests I did to verify that this
behaves the way I want? Is there a better / more idiomatic way to do
this?

**

[EMAIL PROTECTED]:~/learning/haskell/lists$ cat drop.hs
mydrop :: Int -> [Int] -> [Int]
mydrop 0 xs = xs
mydrop n xs = mydrop (n-1) (tail xs)

main = test
test = do print test1
print test2
print test3

test1 = mydrop 3 [1,2,3] == []
test2 = mydrop 2 [1,2,3] == [3]
test3 = mydrop 0 [1,2,3] == [1,2,3]
[EMAIL PROTECTED]:~/learning/haskell/lists$ runghc drop.hs
True
True
True
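
One more idiomatic option for the tests (just a suggestion, assuming
QuickCheck is available) is to state the behaviour as a property against
the Prelude's drop:

import Test.QuickCheck

mydrop :: Int -> [Int] -> [Int]
mydrop 0 xs = xs
mydrop n xs = mydrop (n-1) (tail xs)

-- only meaningful for 0 <= n <= length xs, which is what the tests above cover
prop_mydrop :: Int -> [Int] -> Property
prop_mydrop n xs = (n >= 0 && n <= length xs) ==> mydrop n xs == drop n xs

main :: IO ()
main = quickCheck prop_mydrop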

2007/2/26, iliali16 [EMAIL PROTECTED]:


Hi I am trying to implement the function drop in haskell the thing is that I
I have been trying for some time and I came up with this code where I am
trying to do recursion:

drop :: Integer - [Integer] - [Integer]
drop 0 (x:xs) = (x:xs)
drop n (x:xs)
|n  lList (x:xs) = dropN (n-1) xs :
|otherwise = []

So I want to understand how would this work and what exacttly should I put
as an answer on line 4 couse that is where I am lost. I know I might got the
base case wrong as well but I don't know what to think for it. I have done
the lList as a function before writing this one. Thanks to those who can
help me understand this. Thanks alot in advance! Have a nice day!
--
View this message in context: 
http://www.nabble.com/Hi-can-u-explain-me-how-drop-works-in-Haskell-tf3290490.html#a9152251
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe