Re: Stream of random comments continues

1998-12-04 Thread Mikael Rittri

Dave Tweed wrote:

> On Fri, 4 Dec 1998, Keith Wansbrough wrote:
>
> > Surely it would be better to split the one stream into several infinite ones:
> > - - -

>
> Closer inspection reveals this is not necessarily the best idea
> (particularly if you're going to repeat the trick several times on various
> subsets), because you can get nasty correlations between the various
> streams even from quite a good random number generator. There was a paper
> published in the JFP about a better way of splitting streams, which I think
> appeared sometime between January 1996 and October 1996.

This reminds me slightly of the Functional Pearl: On generating unique names
(Augustsson, Rittri, Synek), which appeared in JFP in January 1994.  [vol 4(1)]
We showed how to simulate an infinite binary tree of unique names,
by having a hidden call to gensym().
Instead of gensym(), I suppose one could hide a call to a random number
generator.  But I don't know if nasty correlations can appear in this case.
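For what it's worth, here is a rough purely functional sketch of that idea
(my illustration only, not the low-level trick from the paper), using the
split operation from the Random library of later Haskell systems
(System.Random). How independent the subtrees turn out to be depends
entirely on the quality of split, which is exactly where nasty correlations
could sneak in.

import System.Random (StdGen, mkStdGen, split, randoms)

-- An infinite binary tree of generators: every consumer grabs a subtree
-- and never needs to coordinate with the others.
data Tree a = Node a (Tree a) (Tree a)

genTree :: StdGen -> Tree StdGen
genTree g = Node g (genTree gl) (genTree gr)
  where (gl, gr) = split g

-- For example, the streams belonging to the root's two children:
demo :: ([Int], [Int])
demo = (take 5 (randoms gl), take 5 (randoms gr))
  where Node _ (Node gl _ _) (Node gr _ _) = genTree (mkStdGen 42)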

   -- Mikael Rittri






Re: Stream of random comments continues

1998-12-04 Thread Lennart Augustsson


> There was a paper
> published in the JFP about a better way of splitting streams, which I think
> appeared sometime between January 1996 and October 1996.
Are you perhaps referring to the paper by me, Mikael Rittri, and Dan Synek
called "On generating unique names" (Jan 94)?  It has a low-level trick for
splitting streams in a good way that could be applied here.

-- Lennart





Re: Stream of random comments continues

1998-12-04 Thread D. Tweed

On Fri, 4 Dec 1998, Lennart Augustsson wrote:

> 
> > There was a paper
> > published in the JFP about a better way of splitting streams, which I think
> > appeared sometime between January 1996 and October 1996.
> Are you perhaps referring to the paper by me, Mikael Rittri, and Dan Synek
> called "On generating unique names" (Jan 94)?  It has a low-level trick for
> splitting streams in a good way that could be applied here.

Looking at the JFP web page, I _think_ I was quite a way off and it was in
fact a 1992 paper that I skimmed through one day (whilst leafing through
the entire set of back issues of the JFP trying to orient myself, which
`explains' why I got confused enough to think it was a contemporary
paper).


F. Warren Burton and Rex L. Page. Distributed random number generation.
Journal of Functional Programming, 2(2):203-212, April 1992.


IIRC, it attempts to deal precisely with the point that you want to be
able to pass infinite lists of (pseudo) random numbers (derived from a
central source) into lazy functions without getting (a) bad space leaks or
(b) nasty correlations between the sequences. 

Caveat: I've only looked at the title; I may be misremembering again.

Anyway, hope this is of some assistance.

___cheers,_dave_
email: [EMAIL PROTECTED]          "I'm guilty of a lot of things but I've
www.cs.bris.ac.uk/~tweed/pi.htm   never violated the law of conservation
work tel: (0117) 954-5253         of energy!" -- Bart Kosko, Nanotime







Re: Stream of random comments continues

1998-12-04 Thread Sven Panne

Keith Wansbrough wrote:
> Surely it would be better to split the one stream into several infinite ones:
> 
>   splitStream :: [a] -> ([a],[a])
> 
>   splitStream xs = unzip (spl xs)
>     where spl (x:y:xs) = (x,y):(spl xs)
> 
> Then you don't have to know how many you are going to use from each stream.

If I'm not mistaken, this definition has the danger of a space leak:
Suppose you initially use a lot of elements of the first list/stream
only and then elements from the second, e.g.:

foo :: Int -> IO ()
foo n = do print . head . drop n $ xs
           print . head $ ys
  where (xs,ys) = splitStream (repeat 1)

This gives you space usage linear in n, because ys holds on to all the
second components that have been generated but not yet consumed...   :-(

Cheers,
   Sven

-- 
Sven Panne                                  Tel.: +49/89/2178-2235
LMU, Institut fuer Informatik               FAX : +49/89/2178-2211
LFE Programmier- und Modellierungssprachen  Oettingenstr. 67
mailto:[EMAIL PROTECTED]                    D-80538 Muenchen
http://www.pms.informatik.uni-muenchen.de/mitarbeiter/panne





Re: Stream of random comments continues

1998-12-04 Thread D. Tweed

On Fri, 4 Dec 1998, Keith Wansbrough wrote:

> Surely it would be better to split the one stream into several infinite ones:
> 
>   splitStream :: [a] -> ([a],[a])
> 
>   splitStream xs = unzip (spl xs)
>     where spl (x:y:xs) = (x,y):(spl xs)
> 
> Then you don't have to know how many you are going to use from each stream.

Closer inspection reveals this is not necessarily the best idea
(particularly if you're going to repeat the trick several times on various
subsets), because you can get nasty correlations between the various
streams even from quite a good random number generator. There was a paper
published in the JFP about a better way of splitting streams, which I think
appeared sometime between January 1996 and October 1996. (Sorry I can't be
more specific -- I didn't really keep any notes on all the frantic
background reading I was doing :-) )
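As a concrete (if contrived) illustration of my own, nothing to do with the
JFP paper: if the master stream comes from a linear congruential generator,
de-interleaving it gives two streams that are not independent at all; each
element of the second stream is exactly one generator step ahead of the
corresponding element of the first. (The constants below are Knuth's MMIX
parameters, chosen purely for illustration, and splitStream is repeated
from Keith's mail so the snippet stands alone.)

-- Illustration only: de-interleaving an LCG stream yields two perfectly
-- correlated streams.
step :: Integer -> Integer
step x = (6364136223846793005 * x + 1442695040888963407) `mod` (2 ^ 64)

lcg :: Integer -> [Integer]
lcg = iterate step

splitStream :: [a] -> ([a], [a])
splitStream zs = unzip (spl zs)
  where spl (x:y:rest) = (x, y) : spl rest

-- Each element of the second stream is 'step' of the corresponding
-- element of the first, so this evaluates to True.
correlated :: Bool
correlated = and (take 1000 (zipWith (\x y -> y == step x) evens odds))
  where (evens, odds) = splitStream (lcg 42)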

___cheers,_dave_
email: [EMAIL PROTECTED]          "I'm guilty of a lot of things but I've
www.cs.bris.ac.uk/~tweed/pi.htm   never violated the law of conservation
work tel: (0117) 954-5253         of energy!" -- Bart Kosko, Nanotime






Re: Stream of random comments continues

1998-12-04 Thread Ralf Hinze

| The discussion of random numbers in Haskell should perhaps
| move elsewhere as it is quite restricted, but it shows one
| problem with the functional approach to *practical* programming:
| in *my opinion* our fascination with the Monadic Approach to
| Universe and Everything, and the possibility of programming in an
| imperative way, has a negative side effect. My physicist friends
| once asked me, after being told that imperative constructs are
| possible in Haskell: "so, what's the point in using this language?
| Fortran code is simpler than "do" in Haskell..."

Huuuh. Did you tell your friends that imperative constructs are the
`only' thing possible? Surely not. So I don't quite understand this
statement (on a scientific basis).

| I would change the
| Lennart example by using splitAt instead of take, and continuing the
| next
| instance with the remaining part of the master sequence.

But now you are basically programming within a state monad, the
infinite sequence of pseudo-random ints being the state. At its
simplest:

let (is1, ms1) = splitAt m ms
    (is2, ms2) = splitAt n ms1
 

vs

do is <- ints m
   ds <- doubles m

However, having re-thought the whole thing I guess it's a good idea to
use an infinite stream as the state instead of the seed. As you
remarked in a previous mail, this would make it simple to provide a
different generator. Nonetheless, I still believe that using a monad
(not necessarily IO) is the RIGHT approach, because the state is
passed implicitly and it lets you hide the technical details (how many
Ints do you need for an Integer in the range (l,r)?) from the user.
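Here is a minimal sketch of what I have in mind (Supply, ints, doubles and
runSupply are only illustrative names, and the Int-to-Double conversion is
a mere placeholder; a real implementation would hide the generator details
behind exactly these operations):

-- A minimal sketch: a state monad whose state is the infinite stream of
-- pseudo-random Ints.  Supply, ints, doubles and runSupply are
-- illustrative names; the Int-to-Double conversion is a placeholder.
newtype Supply a = Supply ([Int] -> (a, [Int]))

instance Functor Supply where
  fmap f (Supply g) = Supply (\s -> let (x, s') = g s in (f x, s'))

instance Applicative Supply where
  pure x = Supply (\s -> (x, s))
  Supply f <*> Supply g = Supply (\s ->
    let (h, s')  = f s
        (x, s'') = g s'
    in  (h x, s''))

instance Monad Supply where
  Supply g >>= k = Supply (\s ->
    let (x, s')  = g s
        Supply h = k x
    in  h s')

ints :: Int -> Supply [Int]              -- take m raw Ints from the state
ints m = Supply (splitAt m)

doubles :: Int -> Supply [Double]        -- placeholder conversion
doubles m = fmap (map fromIntegral) (ints m)

runSupply :: Supply a -> [Int] -> a      -- run against a master stream
runSupply (Supply g) s = fst (g s)

example :: [Int] -> ([Int], [Double])
example = runSupply (do is <- ints 3
                        ds <- doubles 2
                        return (is, ds))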

Cheers, Ralf





Re: Stream of random comments continues

1998-12-04 Thread Keith Wansbrough

> Lennart advocates the use of streams :
> 
> >   let mis = take n (random ss1)
> >       is  = take n (toInt (random ss2))
> >       ds  = take n (toDouble (random ss3))
> >   in  ...
> 
> and I agree entirely with him, although the coding details might
> be different. Streams are nice and safe. And methodologically
> sane.

Surely it would be better to split the one stream into several infinite ones:

  splitStream :: [a] -> ([a],[a])

  splitStream xs = unzip (spl xs)
    where spl (x:y:xs) = (x,y):(spl xs)

Then you don't have to know how many you are going to use from each stream.
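For example (an illustrative sketch reusing the splitStream above;
threeStreams is just a made-up name), a consumer that needs three streams
simply splits twice, still without counting how many elements it will draw
from each:

-- Sketch: peel off three infinite streams by splitting twice.
threeStreams :: [a] -> ([a], [a], [a])
threeStreams zs = (xs, ys, ws)
  where (xs, rest) = splitStream zs
        (ys, ws)   = splitStream rest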

--KW 8-)
-- 
: Keith Wansbrough, MSc, BSc(Hons) (Auckland) :
: PhD Student, Computer Laboratory, University of Cambridge, England. :
:  (and recently of the University of Glasgow, Scotland. [><] )   :
: Native of Antipodean Auckland, New Zealand: 174d47' E, 36d55' S.:
: http://www.cl.cam.ac.uk/users/kw217/  mailto:[EMAIL PROTECTED] :
:-: