Re: [Haskell-cafe] lazy A-star search

2011-10-22 Thread Richard Senington

What do you mean by effective?

While I am not sure they mention A* search, you might like to look at 
the paper
"Modular Lazy Search for Constraint Satisfaction Problems" by Nordin & 
Tolmach.

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.4704

RS


On 22/10/11 13:28, Anton Kholomiov wrote:
Recently I was looking for an A-star search algorithm. I found a package,
but I couldn't understand the code. Then I saw some blog posts, but they
were difficult to understand too. I thought about a simpler solution that
relies on laziness, and I've come to this:

Heuristic search is like depth-first search, but solutions in sub-trees
are concatenated with the mergeBy function, which merges two lists
according to a given ordering:

module Search where

import Control.Applicative
import Data.Function(on)
import Control.Arrow(second)
import Data.Tree

-- | Heuristic search. Nodes are visited from smaller to greater.
searchBy :: (a -> a -> Ordering) -> Tree a -> [a]
searchBy heur (Node v ts) =
    v : foldr (mergeBy heur) [] (searchBy heur <$> ts)

-- | Merge two lists, emitting elements in the order given by the comparison.
mergeBy :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
mergeBy _ a      []     = a
mergeBy _ []     b      = b
mergeBy p (a:as) (b:bs)
    | a `p` b == LT = a : mergeBy p as (b:bs)
    | otherwise     = b : mergeBy p bs (a:as)
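
A tiny made-up example of what searchBy produces on a hand-built tree
(the numbers are arbitrary):

exampleTree :: Tree Int
exampleTree = Node 5 [ Node 3 [Node 7 [], Node 4 []]
                     , Node 6 [Node 1 []] ]

-- searchBy emits the root first and then lazily merges the sub-tree
-- streams; if I have traced the merges correctly this is [5,3,4,6,1,7]
exampleSearch :: [Int]
exampleSearch = searchBy compare exampleTree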


Now we can define specific heuristic searches in terms of searchBy:

-- | Heuristic is distance to goal.
bestFirst :: Ord h => (a -> h) -> (a -> [a]) -> a -> [a]
bestFirst dist alts =
    searchBy (compare `on` dist) . unfoldTree (\a -> (a, alts a))

-- | A-star search.
-- Heuristic is estimated length of whole path.
astar :: (Ord h, Num h) => (a -> h) -> (a -> [(a, h)]) -> a -> [a]
astar dist alts s0 = fmap fst $
    searchBy (compare `on` astarDist) $ unfoldTree gen (s0, 0)
  where astarDist (a, d) = dist a + d
        gen (a, d)       = d `seq` ((a, d), second (+d) <$> alts a)
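
To check I read the interface right, a toy example (the problem and the
numbers are made up): walking the number line from 0 towards 10, where
the moves are +1 and +2, each costing 1, and the heuristic is the
distance still to cover.

-- the goal 10 should appear early in the stream, since the +2 moves
-- keep lowering the estimated total cost
walkToTen :: [Int]
walkToTen = astar (\n -> abs (10 - n)) (\n -> [(n + 1, 1), (n + 2, 1)]) 0

reachedTen :: Bool
reachedTen = 10 `elem` take 20 walkToTen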

I'm wondering: is it effective enough?


Anton


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Data Flow Programming in FP

2011-06-28 Thread Richard Senington

On 20/06/11 15:45, Richard Senington wrote:

Hi all,

I have recently become interested in Dataflow programming and how it 
relates to functional languages.
I am wondering if the community has any advice on reading matter or 
other directions to look at.


So far I have been looking through the FRP libraries, using Haskell 
functions with lazy lists for co-routines, and "The Essence of Dataflow
Programming" by Uustalu and Vene, where they propose using comonads.


It looks as though Iteratees are also relevant but I have not got 
round to looking at them in detail yet.


Have I missed anything?

Regards

RS

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



Thank you for all the feedback; it has been very useful and I am still 
looking into these sources.


RS

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data Flow Programming in FP

2011-06-21 Thread Richard Senington

On 20/06/11 16:37, David Barbour wrote:
On Mon, Jun 20, 2011 at 7:45 AM, Richard Senington <sc06...@leeds.ac.uk> wrote:


I have recently become interested in Dataflow programming and how
it relates to functional languages.
I am wondering if the community has any advice on reading matter
or other directions to look at.

So far I have been looking through the FRP libraries, using
Haskell functions with lazy lists for co-routines, and "The Essence of
Dataflow Programming" by Uustalu and Vene, where they propose using
comonads.

It looks as though Iteratees are also relevant but I have not got
round to looking at them in detail yet.

Have I missed anything?


Arrows are a useful model for dataflow programming. But several FRP 
models are arrowized, so you might already have observed this. Which 
FRP models have you looked at? (there are several)


I'm developing a model for reactive dataflows in open distributed 
systems, called reactive demand programming (RDP). It's basically 
distributed FRP with carefully constrained side-effects and signals 
that model disruption. The effects model enforces spatial idempotence 
and commutativity, which allows developers to perform refactoring and 
abstraction similar to that in a pure functional model. That signals 
model disruption allows 'open' composition and extension (e.g. runtime 
plugins). RDP is more composable than FRP because client-server 
relationships can be captured as regular RDP behaviors.


RDP isn't ready for release, yet, but you can read a bit more at my blog:

[1] http://awelonblue.wordpress.com/2011/05/21/comparing-frp-to-rdp/

Regards,

David Barbour



I have been looking through the papers by Conal Elliott and Paul Hudak, 
Hudak's book "The Haskell School of Expression" (chapters 13, 15, 17) and 
the latest version of the Reactive library on Hackage.
In the past I have looked at Arrows, but I think I should have another 
look now, thanks for the suggestion.


RS
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data Flow Programming in FP

2011-06-20 Thread Richard Senington

Hi all,

I have recently become interested in Dataflow programming and how it 
relates to functional languages.
I am wondering if the community has any advice on reading matter or 
other directions to look at.


So far I have been looking through the FRP libraries, using Haskell 
functions with lazy lists for co-routines, and "The Essence of Dataflow
Programming" by Uustalu and Vene, where they propose using comonads.


It looks as though Iteratees are also relevant but I have not got round 
to looking at them in detail yet.


Have I missed anything?

Regards

RS

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stacking data types

2011-04-06 Thread Richard Senington

On 06/04/11 20:32, Brandon Moore wrote:

From: Yves Parès
Sent: Wed, April 6, 2011 1:57:51 PM

Hello Café,

I'm trying to get some modular data types.
The idea that came to me is that I could stack them, for instance :

data Character a = Character { life       :: Int,
                               charaInner :: a }

data Gun a = Gun { firepower :: Int,
                   gunInner  :: a }

data Armor a = Armor { resistance :: Int,
                       armorInner :: a }

Then a character with a gun and an armor can be built this way:

chara = Character 100 $ Armor 40 $ Gun 12

The idea now is to be able to get some part of the character:

I don't have a better design to suggest, but I think this may end up more
complicated than you want. In particular, it will probably be complicated to
make a collection of characters that all have, e.g., an Armor but may
carry arbitrary other stuff.
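
As a rough illustration of what getting a part back out of such a stack
might look like (the class and instance names here are invented, and the
stack is assumed to bottom out in a unit value, e.g. Gun 12 ()):

-- invented for illustration: each layer either answers the query itself
-- or delegates to whatever it wraps; the stack must end in ()
class HasResistance c where
  totalResistance :: c -> Int

instance HasResistance () where
  totalResistance _ = 0

instance HasResistance a => HasResistance (Character a) where
  totalResistance = totalResistance . charaInner

instance HasResistance a => HasResistance (Gun a) where
  totalResistance = totalResistance . gunInner

instance HasResistance a => HasResistance (Armor a) where
  totalResistance x = resistance x + totalResistance (armorInner x)

-- e.g. totalResistance (Character 100 (Armor 40 (Gun 12 ()))) == 40

A collection of differently equipped characters then still needs
something like existential types, which is where it gets painful.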

If you do want to use this sort of design, check out Wouter Swierstra's
paper "Data types à la carte".

Brandon.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
I was struggling with this idea a few weeks ago. I think I tried the 
same types of extensions that you have, also with no luck.


I then started looking at extensible records on the Haskell wiki
http://www.haskell.org/haskellwiki/Extensible_record

This was the one I liked, but as far as I could tell it has not been 
implemented (please tell me if I was wrong about that)

http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html

RS

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Combinatorial Problems

2011-03-06 Thread Richard Senington

Hello all,

I have just uploaded a very early (0.0.1) version of a library I am 
working on to Hackage. The library represents some well-known 
combinatorial problems, and provides loading routines for standard 
repositories of these problems.


At the moment it only supports SAT-3 and the SATLIB loading routines. It 
does have a set of functions for TSP, but I have not yet finished 
reworking the TSPLIB loading code. Hopefully that will be coming soon.


It is a very early version, so not heavily tested, but I thought it would 
be better to put it up so that anyone who is interested can have a look
and make suggestions. If you have any comments, bug reports or ideas for 
modifications, please feel free to email me.


RS


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Where to put a library

2011-03-02 Thread Richard Senington

Hi all,

I am working on ways to implement local search meta-heuristics in Haskell.
I need various test problems to experiment on, and this has resulted in me
building various file parsers and data structures for these.
Currently I have versions for SAT, TSP and a couple of more private file 
formats.


The file parsers are designed to process files coming out of the TSPLIB 
and SATLIB repositories. I also want to extend this to cover scheduling 
and timetabling example problems.

Since these are all related, I was going to try to put them together into 
a single library and post it to Hackage, but I am not sure what to put 
them under. I assume that Data is for data structures, which is not quite 
right.

I was thinking about something like "Problems" as a top-level point in 
the hierarchy. Does anyone have any other suggestions, as this seems a 
little odd as a name to me?

Alternatively, does this already exist and I have just missed it?

Cheers

RS

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Splittable random numbers

2010-11-12 Thread Richard Senington

On 12/11/10 20:56, Bryan O'Sullivan wrote:
On Fri, Nov 12, 2010 at 12:34 PM, Luke Palmer wrote:



Yeah I think a package of randomness tests could be really useful.
 Cool :-)


There are already well-established suites of very thorough PRNG tests, 
such as Diehard and Big Crush. Please don't invent another.
Thank you for the advice, but since I am just learning about some of this 
stuff, how about I have a go at implementing some of their tests?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Splittable random numbers

2010-11-12 Thread Richard Senington

On 11/11/10 21:34, Luke Palmer wrote:

On Thu, Nov 11, 2010 at 3:13 AM, Richard Senington wrote:

I got hold of, and looked through the paper suggested in the root of this
thread, "Pseudo random trees in Monte-Carlo", and based upon this
I have thrown together a version of the binary tree based random number
generator suggested.

I would like to point out that I do not know very much about random number
generators, the underlying mathematics or any subsequent papers on this
subject, this is just a very naive implementation based upon this one paper.

As a question: the following code actually generates a stream of numbers
that is more random than I was expecting; if anyone can explain why, I
would be very interested.

What do you mean more random than you were expecting?  Shouldn't they
be "maximally random"?

   
My issue is how it should react, given how the underlying data structure 
works.
If you just use this tree to generate numbers, you are taking 
Left, Left, Left, and so on.

If you split the tree, you get Left and Right.

So, in my test code at the bottom I have taken a generator and then 
generated 10 numbers from it.
Then we split. The left-hand branch (g1 in the test) should generate 
numbers that are just "tail . randoms $ gen",
but this is not what happens, at least not for raw integers.

I have been doing some more testing, and if you limit the range (0-1000 
seems to be stable) then it works as described above; however,
if you use wider ranges, or even the maximum range, then the sequences 
do not match as expected.
This worries me, as one advantage of PRNGs (see the paper, as I am 
paraphrasing) is repeatability, or certain expected properties.


The underlying system is working, so I probably have the range or data 
type conversion wrong somewhere.

You can test the underlying tree like so.

rawTest :: LehmerTree -> IO ()
rawTest t = do print $ myTake 10 t
               print $ myTake 10 $ leftBranch t
               print $ myTake 10 $ rightBranch t
  where
    myTake 0 _ = []
    myTake x t = nextInt t : myTake (x - 1) (leftBranch t)


BTW, nice module.  Do you want to hackage it up?  If not, I will.

   
I would be happy to hackage it up, but I think this is a bit premature. 
I started to read a bit more about PRNGs, and I came across tests for
randomness. It seemed that a library of test systems for RandomGens 
would be quite cool, so I started coding last night. That is far too
premature to even post up here, but in short, this system gives some 
very odd results.


For example, mean averages (I tried medians but they did not tell me 
much; I am going to look at modes some time this weekend).


mean :: [Float] -> Double
mean [] = error "empty list has no mean?"
mean xs = (sum . map realToFrac $ xs) / (fromIntegral . length $ xs)

rangedMeanTest :: RandomGen g => g -> Int -> (Int, Int) -> Double
rangedMeanTest g count range = let p = take count $ randomRs range g
                               in  mean (map fromIntegral p)

So, I am testing discrete randomness over Ints. We take a range we wish 
to generate over (0-3, for example) and generate some number of test 
values (I used 1000).
This list is converted into Floats and averaged. We can then predict what 
the average should be, given that the generator is unbiased and uniform 
(or nearly uniform).


It does not give the results you would want. This may have something to 
do with picking "good" parameters for the mkLehmerTree function.
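
For comparison, the same test against the standard generator (the seed
here is arbitrary) should land close to the midpoint of the range,
(lo + hi) / 2:

-- sanity check against StdGen; for the range (1,31) the midpoint is 16.0,
-- and the mean of 1000 draws should land near it
checkStdGen :: Double
checkStdGen = rangedMeanTest (mkStdGen 42) 1000 (1, 31)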

For example, using a random setup, I just got these results:
result   expected   range
16.814   16.0       (1,31)
16.191   16.5       (1,32)
16.576   17.0       (1,33)
17.081   17.5       (1,34)
17.543   18.0       (1,35)

In short, I am worried by the properties of this random number generator. 
I propose improving the testing system, and then posting both the test 
suite and this random generator to Hackage, unless you really want it up 
now in a very very preliminary form.

RS


import System.Random

data LehmerTree = LehmerTree { nextInt     :: Int,
                               leftBranch  :: LehmerTree,
                               rightBranch :: LehmerTree }

instance Show LehmerTree where
    show g = "LehmerTree, current root = " ++ (show $ nextInt g)

mkLehmerTree :: Int -> Int -> Int -> Int -> Int -> Int -> LehmerTree
mkLehmerTree aL aR cL cR m x0 = innerMkTree x0
  where
    mkLeft  x = (aL * x + cL) `mod` m
    mkRight x = (aR * x + cR) `mod` m
    innerMkTree x = let l = innerMkTree (mkLeft x)
                        r = innerMkTree (mkRight x)
                    in  LehmerTree x l r

mkLehmerTreeFromRandom :: IO LehmerTree
mkLehmerTreeFromRandom = do gen <- getStdGen
                            let a:b:c:d:e:f:_ = randoms gen
                            return $ mkLehmerTree a b c d e f
 

This can be pure:

mkLehmerTreeFromRandom :: (RandomGen g) => g -> LehmerTree
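
A sketch of that pure variant, under a different name so it does not
clash with the IO version above:

-- the pure version Luke suggests: thread the supplied generator instead
-- of reading the global StdGen
mkLehmerTreeFromGen :: RandomGen g => g -> LehmerTree
mkLehmerTreeFromGen gen = let a:b:c:d:e:f:_ = randoms gen
                          in  mkLehmerTree a b c d e f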

   

instance R

Re: [Haskell-cafe] Splittable random numbers

2010-11-11 Thread Richard Senington
I got hold of, and looked through, the paper suggested in the root of 
this thread, "Pseudo random trees in Monte-Carlo", and based upon this
I have thrown together a version of the binary tree based random number 
generator suggested there.


I would like to point out that I do not know very much about random 
number generators, the underlying mathematics or any subsequent papers 
on this subject; this is just a very naive implementation based upon 
this one paper.


As a question: the following code actually generates a stream of numbers 
that is more random than I was expecting; if anyone can explain why, I 
would be very interested.


import System.Random

data LehmerTree = LehmerTree { nextInt     :: Int,
                               leftBranch  :: LehmerTree,
                               rightBranch :: LehmerTree }

instance Show LehmerTree where
    show g = "LehmerTree, current root = " ++ (show $ nextInt g)

-- Build an infinite binary tree of linear-congruential values: the left
-- child steps with (aL, cL), the right child with (aR, cR), both modulo m.
mkLehmerTree :: Int -> Int -> Int -> Int -> Int -> Int -> LehmerTree
mkLehmerTree aL aR cL cR m x0 = innerMkTree x0
  where
    mkLeft  x = (aL * x + cL) `mod` m
    mkRight x = (aR * x + cR) `mod` m
    innerMkTree x = let l = innerMkTree (mkLeft x)
                        r = innerMkTree (mkRight x)
                    in  LehmerTree x l r

mkLehmerTreeFromRandom :: IO LehmerTree
mkLehmerTreeFromRandom = do gen <- getStdGen
                            let a:b:c:d:e:f:_ = randoms gen
                            return $ mkLehmerTree a b c d e f

-- 'next' walks down the left spine; 'split' hands out the two sub-trees.
instance RandomGen LehmerTree where
    next g     = (fromIntegral . nextInt $ g, leftBranch g)
    split g    = (leftBranch g, rightBranch g)
    genRange _ = (0, 2147483562) -- duplicate of stdRange


test :: IO ()
test = do gen <- mkLehmerTreeFromRandom
          print gen
          let (g1, g2) = split gen
          let p  = take 10 $ randoms gen :: [Int]
          let p' = take 10 $ randoms g1  :: [Int]
          -- let p'' = take 10 $ randoms g2 :: [Float]
          print p
          print p'
          -- print p''


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Splittable random numbers

2010-11-04 Thread Richard Senington

I might have a use for this, so I could give it a go.
I'll have a look through this post in detail tomorrow morning.

RS

On 04/11/10 17:38, Simon Peyton-Jones wrote:


Hi Cafe

A while back there was a thread 
 
about a good implementation of a (pseudo) random number generator with 
a good "split" operation.  There's lots of material on generators that 
generate a linear *sequence* of random numbers, but much less on how 
to generate a *tree* of random numbers, which is what Haskell's 
System.Random API requires.


I happened to meet Burton Smith recently, who wrote some early papers 
about this stuff (eg "Pseudo random trees in Monte-Carlo"), so I asked 
him.


His reply is below, along with some follow-up comments from his 
colleagues Tolga Acar and Gideon Yuval.   The generator uses crypto 
functions, so it's probably more computationally expensive than common 
linear-sequence generators, but in exchange you get robust splitting.


Does anyone feel like taking the idea and turning it into a Haskell 
library?   (Or even a Haskell Wiki page?)  I'm taking the liberty of 
cross-posting to the libraries list.


Simon

*From:* Burton Smith
*Sent:* Tuesday, November 02, 2010 3:58 PM
*To:* Simon Peyton-Jones
*Cc:* Gideon Yuval (Gideon Yuval); Tolga Acar
*Subject:* Random number generation

With some help from Gideon and Tolga, I think the solution to the 
"arbitrary tree of random numbers problem" is as follows:


The generator G is a pair comprising a crypto key G.k and an integer 
counter (the "message") G.c.  The (next G) operation returns a pair: 
1. a random integer r obtained by encrypting G.c with G.k, and 2. a 
new generator G' such that G'.k = G.k and G'.c = G.c + 1.  The (split 
G) operation is similar, returning the same G', except that instead of 
returning a random integer r it returns a third generator G'' such 
that G''.k = r and G''.c = 0.


A suitable block cipher system might be 128-bit AES (Rijndael).  
Unencumbered implementations exist in a variety of languages, and 
performance is pretty good and will improve dramatically as hardware 
support improves.  I'd pick both crypto key size and the size of the 
result r to be 128 bits, and employ a 64-bit counter c.  Other crypto 
options exist.
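
For concreteness, a rough Haskell transliteration of the scheme Burton
describes, against the System.Random API. The 'mix' function below is
only a cheap stand-in for "encrypt counter c under key k" (a real
implementation would call an AES library instead), so this is a sketch of
the shape of the thing, not a usable generator:

import Data.Bits (shiftR, xor)
import Data.Word (Word64)
import System.Random (RandomGen (..))

-- the generator is a key plus a counter, as described above
data CounterGen = CounterGen { cgKey :: Word64, cgCounter :: Word64 }

-- placeholder only: NOT AES, just a 64-bit mixing function
mix :: Word64 -> Word64 -> Word64
mix k c = f (k `xor` f c)
  where
    f x = let y = (x `xor` (x `shiftR` 30)) * 0xbf58476d1ce4e5b9
              z = (y `xor` (y `shiftR` 27)) * 0x94d049bb133111eb
          in  z `xor` (z `shiftR` 31)

instance RandomGen CounterGen where
  next  (CounterGen k c) = ( fromIntegral (mix k c)   -- r = "encrypt" G.c with G.k
                           , CounterGen k (c + 1) )   -- G' steps the counter
  split (CounterGen k c) = ( CounterGen k (c + 1)     -- G'
                           , CounterGen (mix k c) 0 ) -- G'' is keyed by r
  genRange _ = (minBound, maxBound)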


*From:* Simon Peyton-Jones
*Sent:* Wednesday, November 03, 2010 3:11 AM
*To:* Burton Smith; Gideon Yuval (Gideon Yuval)
*Cc:* Tolga Acar; Simon Peyton-Jones
*Subject:* RE: Random number generation

Burton, Gideon, Tolga

Aha, that's interesting.   I'd never seen a random number generator 
based on crypto, but it seems like an attractive idea.  As I 
understand it, successive calls to 'next' will give you


  encrypt(0), encrypt(1), encrypt(2), encrypt(3),

Is this standard?  Does it have provably good randomness properties 
(cycle length, what else?) like other RNGs?  Or does it simply seem 
very plausible?


Can I send it round to the Haskell mailing list, in the hope that 
someone will turn the idea into a library?   (Ideally I'd like to make 
claims about the randomness properties in doing so, hence my qns above.)


*From:* Gideon Yuval (Gideon Yuval)
*Sent:* Wednesday, November 03, 2010 7:15 AM
*To:* Simon Peyton-Jones; Burton Smith
*Cc:* Tolga Acar
*Subject:* RE: Random number generation

As long as the key, and the non-counting part of the counter, are 
kept "secret", anyone who can distinguish these pseudorandoms from 
real randoms in less than 2^128 steps has a nice paper for 
crypto-2011 (this is known as "provable security") concerning a 
weakness in AES128.


One exception: real randoms have a birthday paradox; the pseudorandoms 
suggested do not. If you care, you can:


(1) Limit the counter to 2^32 steps (paradox has 2^-64 probability) or 
even 2^16 (2^-96), then rekey; or


(2) XOR 2 such encrypted counters, with different keys; or

(3) XOR 3 successive values for the same counter (just possibly 
cheaper; top-of-head idea).


More hard-core: swap the position of key & message: encrypting a 
constant "secret" with 1, 2, 3, 4 gives pseudorandoms with no 
birthday paradox.
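
Gideon's option (2) is easy to express on top of the sketch given
earlier, again with the placeholder mix standing in for AES:

-- option (2): draw from two independently keyed counter streams and XOR
-- the two "encrypted" counters together
nextXor2 :: (CounterGen, CounterGen) -> (Word64, (CounterGen, CounterGen))
nextXor2 (CounterGen k1 c1, CounterGen k2 c2) =
  ( mix k1 c1 `xor` mix k2 c2
  , (CounterGen k1 (c1 + 1), CounterGen k2 (c2 + 1)) )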


*From:* Tolga Acar
*Sent:* 03 November 2010 15:50
*To:* Gideon Yuval (Gideon Yuval); Simon Peyton-Jones; Burton Smith
*Subject:* RE: Random number generation

Simon,

The general idea is not really that new in the crypto area, with the 
constraints Gideon describes, of course. That is typically called a 
PRNG -- Pseudo Random Number Generator, or in another parlance, 
Deterministic Random Bit Generators (DRBG). The DRBG constructions 
based on hash functions and block ciphers are even standardized in 
NIST publication SP800-90 (even though I may not recommend every one 
of them).


As for the construction below, which is based on the AES block cipher: 
it essentially takes advantage of the PRP (Pseudo Random 
Permutation) property of the AES block cipher, as each block cipher 
ought to b