Re: [Haskell-cafe] Deducing a type signature

2010-05-19 Thread Dan Weston

 (i)  strange f g = g (f g)

 Assume g :: a -> b.  Then f :: (a -> b) -> c.  But since g :: a -> b,
 f g :: a, so c = a.  Therefore, f :: (a -> b) -> a, and g (f g) :: a.
 Therefore, strange :: ((a -> b) -> a) -> (a -> b) -> a.

Almost. The return type of strange is the same as the return type of g 
(the outermost function), namely b.


So strange :: ((a -> b) -> a) -> (a -> b) -> b.
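
As a quick check (my own addition, not part of the original exchange), GHC accepts the definition at exactly this type:

strange :: ((a -> b) -> a) -> (a -> b) -> b
strange f g = g (f g)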

Dan

R J wrote:
Bird 1.6.3 requires deducing type signatures for the functions strange 
and stranger.


Are my solutions below correct?

(i)  strange f g = g (f g)

Assume g :: a -> b.  Then f :: (a -> b) -> c.  But since g :: a -> b,
f g :: a, so c = a.  Therefore, f :: (a -> b) -> a, and g (f g) :: a.
Therefore, strange :: ((a -> b) -> a) -> (a -> b) -> a.

(ii)  stranger f = f f

Assume f :: a -> b.  Since f f is well-typed, type unification requires
a = b.  Therefore, f :: a -> a, and stranger :: (a -> a) -> a.






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type families vs. functional dependencies -- how to express something

2010-05-18 Thread Dan Weston

 Unifying those two types by hand, I get:

 P (A t -> B a)
 ~  P (B a)

Maybe the problem is that type families (and associated types, their 
class cousins) are not injective: P x ~ P y does not imply that x ~ y. 
Maybe you need a data type (with appropriate wrapping and unwrapping) to 
ensure injectivity. Cf:


http://www.haskell.org/haskellwiki/GHC/Type_families#Injectivity.2C_type_inference.2C_and_ambiguity
http://www.mail-archive.com/haskell-cafe@haskell.org/msg63359.html
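
For what it is worth, here is a minimal sketch of that idea (names reused from your program, but not compiled against it): making P an associated *data* family makes it injective, at the price of an explicit wrapper constructor:

{-# LANGUAGE TypeFamilies #-}

data B l = B l

class X a where
  data P a              -- a data family: P x ~ P y now does imply x ~ y
  pp :: a -> P a

instance X (B l) where
  data P (B l) = WrapP (B l)   -- explicit wrapping ...
  pp = WrapP

unwrapP :: P (B l) -> B l      -- ... and unwrapping
unwrapP (WrapP b) = b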

Dan

Tomáš Janoušek wrote:

Hello all,

for the past few hours I've been struggling to express a certain idea using
type families and I really can't get it to typecheck. It all works fine using
functional dependencies, but it could be more readable with TFs. From those
papers about TFs I got this feeling that they should be as expressive as FDs,
and we should use them (some people even occasionally mentioning that FDs may
be deprecated in favor of them). Can someone help me make the following work,
please?

The working code with functional dependencies:


{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ScopedTypeVariables #-}

__ = undefined

data HNil = HNil
data HCons a b = HCons a b

data A a = A a
data B a = B a

class X a l p | a -> l p where
    ll :: a -> l
    pp :: a -> p

instance X (B l) HNil (B l) where
    ll _ = HNil
    pp p = p

instance (X x l p) => X (A t -> x) (HCons t l) p where
    ll _ = HCons __ (ll (__ :: x))
    pp p = pp (p (A (__ :: t)))

-- (inferred)
-- testX :: (X a l (B l)) => a -> B l
testX x = pp x `asTypeOf` B (ll x)


The motivation here is that A a represents a variable of type a, B b
represents a computation with state b, and functions of type A a -> B b
represent computations with state b that use the first parameter as an
accessor for a variable (or, more precisely, state component) of type a. Now,
I want testX to take such a function (with n parameters) and provide it with
those accessors, fixing the type b to contain components of the corresponding
accessors only.
(The example uses dummy types and undefineds for simplicity.)

This pretty much works:

testX (B __)   :: B HNil
testX (\(A _) -> B __) :: B (HCons t HNil)
testX (\(A True) -> B __)  :: B (HCons Bool HNil)
testX (\(A _) (A _) -> B __)   :: B (HCons t (HCons t1 HNil))
testX (\(A _) (A _) (A _) -> B __) :: B (HCons t (HCons t1 (HCons t2 HNil)))
testX (\(A _) -> B HNil)   :: error


This is my attempt to rephrase it using type families:


class X' a where
    type L a
    type P a
    ll' :: a -> L a
    pp' :: a -> P a

instance X' (B l) where
    type L (B l) = HNil
    type P (B l) = B l
    ll' _ = HNil
    pp' p = p

instance (X' x) => X' (A t -> x) where
    type L (A t -> x) = HCons t (L x)
    type P (A t -> x) = P x
    ll' _ = HCons __ (ll' (__ :: x))
    pp' p = pp' (p (A (__ :: t)))

-- (inferred)
-- testX' :: (B (L a) ~ P a, X' a) => a -> P a
testX' x = pp' x `asTypeOf` B (ll' x)


Only the X' (B l) instance works, the other produces a strange error:

testX' (B __)   :: P (B HNil) :: B HNil
testX' (\(A _) -> B __) :: error

Couldn't match expected type `a'
   against inferred type `Main.R:L(->) t (B a)'
  Expected type: P (A t -> B a)
  Inferred type: B (L (A t -> B a))

Unifying those two types by hand, I get:

P (A t -> B a)
~  P (B a)
~  B a

B (L (A t -> B a))
~  B (HCons t (L (B a)))
~  B (HCons t HNil)

Hence, a = HCons t HNil. But GHC doesn't get that.

I'm using GHC 6.12.1.

Thanks for any hints. Best regards,


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] newbie question how to pass data

2010-04-19 Thread Dan Weston

First of all, your function
func  (x,y) s dg  =((x*(cos dg) - y*(sin dg)),(x*(sin dg) - y*(cos dg)))
does NOT work for type (Float -> Float), unless you mean that that is
the type of the unused parameter s. Also, your desired type ((Float ->
Float) -> Bool) itself looks suspicious. It must accept any function
(without something to apply it to) and arbitrarily return True or False. 
How will you decide which? I suspect you need another parameter for this 
function.


Second, on the off chance you are trying to calculate the position on a 
circle scaled then rotated an angle dg from (x,y), that new position is


f (x,y) s dg = (s*(x*(cos dg) - y*(sin dg)),s*(x*(sin dg) + y*(cos dg)))

in which case you are missing the s and the last minus sign in your 
formula should be a plus sign.
If so, this can be evaluated with greater clarity (and probably 
accuracy) in polar coordinates:


g (x,y) s dg = (r * cos a, r * sin a)
  where r = s * sqrt (x^2 + y^2)
        a = atan2 y x + dg

Third, if you did not need the scale, I would use an underscore to make 
that clear:


h (x,y) _ dg = (r * cos a, r * sin a)
  where r = sqrt (x^2 + y^2)
        a = atan2 y x + dg
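
And on the off chance that the type you quoted really is the one you want, one purely speculative reading (my guess, not something you said) is to pre-compose the predicate with the rotation:

-- speculative sketch: rotate the point first, then apply the original predicate
func :: ((Float, Float) -> Bool) -> Float -> ((Float, Float) -> Bool)
func p dg = \(x, y) -> p (x * cos dg - y * sin dg, x * sin dg + y * cos dg)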

That's all the observations I can make unless you describe the problem 
more clearly. Sorry.


Dan

Mujtaba Boori wrote:
sorry 

ok I am trying to make these calculation 


func  (x,y) s dg  =((x*(cos dg) - y*(sin dg)),(x*(sin dg) - y*(cos dg)))

This work for type (Float -> Float)

but how can make it work with ((Float -> Float) -> Bool)

because my main function that I want use with.  it takes ((Float,Float)
-> Bool)  I need to return the same type ((Float,Float) -> Bool)  so it
could be used with other function.



On Mon, Apr 19, 2010 at 5:54 PM, Ozgur Akgun ozgurak...@gmail.com wrote:


Can you at least give an example of how you intend to use this func?
Since you do not describe it's behaviour, it is very hard to make a
useful
comment (at least for me)

Best,

On 19 April 2010 16:54, Mujtaba Boori mujtaba.bo...@gmail.com wrote:
 
  Hello
  I am sorry for the silly question.
 
  I have a function as the following
  func :: ((Float,Float) -> Bool) -> Float -> ((Float,Float) -> Bool)
  I am trying to make calculation in this type ((Float,Float)
-> Bool)  with Float and then pass the information to ((Float,Float)
-> Bool)
 
  Thank again appreciated.
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



--
Ozgur Akgun




--
Mujtaba Ali Alboori



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] search Data.Tree.Zipper

2010-03-08 Thread Dan Weston

I think you want

find :: Foldable t => (a -> Bool) -> t a -> Maybe a
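
If you want to keep the TreeLoc rather than just the label, here is a hedged sketch (untested; it assumes the same Data.Tree.Zipper functions your code below already uses) of a full depth-first search written with Maybe's Alternative instance:

import Control.Applicative ((<|>))

searchTree' :: (a -> Bool) -> TreeLoc a -> Maybe (TreeLoc a)
searchTree' p loc
  | p (getLabel loc) = Just loc
  | otherwise        = firstChild loc >>= go
  where
    -- try this child's subtree, then its right siblings
    go l = searchTree' p l <|> (right l >>= go)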


Jian Fan wrote:

Hi,

There doesn't seem to be a function to search the tree so
I come up with following function:

searchTree :: (a -> Bool) -> TreeLoc a -> Maybe (TreeLoc a)
searchTree pred rootLoc =
  if pred (getLabel rootLoc) then
    Just rootLoc
  else case firstChild rootLoc of
    Just loc -> case searchTree pred loc of
      Just loc -> Just loc
      Nothing  -> case right loc of
        Just rLoc -> searchTree pred rLoc
        Nothing   -> Nothing
    Nothing -> Nothing

Which feels quite ugly. Any suggestions? Thanks.

Jian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] foldl in terms of foldr

2010-01-26 Thread Dan Weston


 f :: a -> b -> c is a function that takes an a, a b, and returns a c.

 g :: (a -> b) -> c takes one argument, which is expected to be a
 function from a to b.  g returns a c.

 That stuff I mentioned before about variable binding and function
 application still applies.  We can show that f and g have isomorphic
 types.  But they are conceptually different types.

Except that f and g are not isomorphic. In fact, there exists no defined
function g :: (a -> b) -> c

(what type would (g id) be?)

Perhaps you meant g :: a -> (b -> c)?
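
For concreteness, a small monomorphic illustration of the arrow-counting rule quoted below (my own example; note that this g fixes c to String, so unlike the fully polymorphic version above it is perfectly definable):

-- two top-level arrows: two arguments
f :: Int -> Bool -> String
f n b = show n ++ show b

-- one top-level arrow: one argument, which is itself a function
g :: (Int -> Bool) -> String
g h = show (h 42)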

Alexander Solla wrote:

On Jan 26, 2010, at 8:11 AM, Xingzhi Pan wrote:


I'm a little confused with the type of step here.  Can it be
considered as taking 2 or 3 arguments and then the compiler has to
infer to decide?


The compiler is going to build up a sequence of functions.  Consider  
the following (mathematical) function:


f(x, y, z) = x^2 + y^2 + z^2.

This is a function in three arguments.  But if you bind any of them  
(say, x) to a value x', you end up with a function g(y,z) =  
f(x',y,z).  This is a function in two arguments.  Bind another  
variable, and you get another function, with one less argument.  You  
need to bind all the variables in order to compute f for a point.



 Say if I, as a code reader, meet such a function
defined with three formal parameters, how can I draw the conclusion of
its type (and it takes 2 arguments actually)?



You can derive this from the syntactic properties of types.  Count the  
number of arrows that are not in parentheses.  That's how many  
arguments the function takes.


f :: a -> b -> c is a function that takes an a, a b, and returns a c.

g :: (a -> b) -> c takes one argument, which is expected to be a
function from a to b.  g returns a c.


That stuff I mentioned before about variable binding and function  
application still applies.  We can show that f and g have isomorphic  
types.  But they are conceptually different types.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invertible functions list

2009-12-28 Thread Dan Weston

This might be pertinent:

Alimarine et al, There and Back Again: Arrows for Invertible Programming
http://www.cs.ru.nl/A.vanWeelden/bi-arrows/
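
If you would rather not pull in the arrow machinery, here is one hedged GADT sketch (names are mine, not from any existing package) of a typed chain of invertible steps whose intermediate types may all differ:

{-# LANGUAGE GADTs #-}

-- A composable chain of invertible steps.
data Iso a b where
  Done :: Iso a a
  Step :: (a -> b) -> (b -> a) -> Iso b c -> Iso a c

forward :: Iso a b -> a -> b
forward Done            = id
forward (Step f _ rest) = forward rest . f

backward :: Iso a b -> b -> a
backward Done            = id
backward (Step _ g rest) = g . backward rest

-- concatenating two chains
append :: Iso a b -> Iso b c -> Iso a c
append Done ys          = ys
append (Step f g xs) ys = Step f g (append xs ys)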

Jonathan Fischoff wrote:
Hi, 

I would like to create a list of tuples (or something similar) of invertible
functions


[((a -> b), (b -> a)), ((b -> c), (c -> b)),

Such that I could call

forward invertibleFuctionList domainValue = ? -- composite all the functions
backward invertibleFuctionList rangeValue = 
forward (reverse invertibleFuctionList) rangeValue  -- or something 
similar



I would also like to concat them. This sounds like a job for GADT that 
someone might have already tackled. Any ideas?


-Jonathan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Restrictions on associated types for classes

2009-12-17 Thread Dan Weston

I think the denotational meanings are different. The instance also implies:

For each Cl t there must exist a Cl u where u does not unify with [v] 
for some v.

In other words, there must be a ground instance.

For the class declaration, the existence of a ground instance can be 
inferred only by excluding infinite types with strict type unification 
semantics. If infinite types were admitted (where type unification is 
done non-strictly), the class declaration allows for infinite types (let 
t ~ [t] in t). The instance does not.


Dan

Martin Sulzmann wrote:

The statements

class Cl [a] => Cl a

and

instance Cl a => Cl [a]

(I omit the type family constructor in the head for simplicity)

state the same (logical) property:

For each Cl t there must exist Cl [t].

Their operational meaning is different under the dictionary-passing 
translation [1].

The instance declaration says we build the dictionary Cl [a] given the
dictionary Cl a. The super class declaration says that the dictionary
for Cl [a]

must be derivable (extractable) from Cl a's dictionary. So, here
we run into a cycle (on the level of terms as well as type inference).

However, if we'd adopt a type-passing translation [2] (similar to
dynamic method lookup in oo languages) then there isn't
necessarily a cycle (for terms and type inference). Of course,
we still have to verify the 'cyclic' property which smells like
we run into non-termination if we assume some inductive reason
(but we might be fine applying co-induction).

-Martin

[1] Cordelia V. Hall, Kevin Hammond, Simon L. Peyton Jones, Philip Wadler:
Type Classes in Haskell. ACM Trans. Program. Lang. Syst. 18(2): 109-138 (1996)

[2] Satish R. Thatte: Semantics of Type Classes Revisited. LISP and
Functional Programming 1994: 208-219


On Thu, Dec 17, 2009 at 6:40 PM, Simon Peyton-Jones simo...@microsoft.com wrote:


|  Hmm.  If you have
|class (Diff (D f)) => Diff f where
| 
|  then if I have
|  f :: Diff f => ...
|  f = e
|  then the constraints available for discharging constraints arising
|  from e are
|  Diff f
|  Diff (D f)
|  Diff (D (D f))
|  Diff (D (D (D f)))
|  ...
| 
|  That's a lot of constraints.
|
| But isn't it a bit like having an instance
|
|Diff f => Diff (D f)

A little bit.  And indeed, could you not provide such instances?
 That is, every time you write an equation for D, such as
   type D (K a) = K Void
make sure that Diff (K Void) also holds.

The way you have it, when you call f :: Diff f => blah, you are obliged
to pass runtime evidence that (Diff f) holds.  And that runtime
evidence includes as a sub-component runtime evidence that (Diff (D
f)) holds.   If you like, the evidence for Diff f looks like this:
   data Diff f = MkDiff (Diff (D f)) (D f x -> x -> f x)
So you are going to have to build an infinite data structure.  You
can do that fine in Haskell, but type inference looks jolly hard.

For example, suppose we are seeking evidence for
   Diff (K ())
We might get such evidence from either
 a) using the instance decl
instance Diff (K a) where ...
or
 b) using the fact that (D I) ~ K (), we need Diff I, so
   we could use the instance
 instance Diff I

Having two ways to get the evidence seems quite dodgy to me, even
apart from the fact that I have no clue how to do type inference for it.

Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Restrictions on associated types for classes

2009-12-17 Thread Dan Weston
Oops, reverse that. The *instance* declaration allows for infinite 
types, the *class* declaration does not.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why?

2009-12-11 Thread Dan Weston

Luke Palmer wrote:

The idea being that any code that is pure could be evaluated anywhere
with a very simple interpreter.  If you have pure code, you can trace
it back and evaluate it in a sandbox where you don't need a C runtime,
a linker, or really anything but the simplest substitution engine.
*All* effects bubble their way up to the top level, so that we know
from the type signature of a value the machinery we will need to run


The alternative is not much better: syntactic sugar (say a wrapping 
keyword similar to deriving) that wraps up a pure type in a State, ST, 
or IO. The inevitable result is that *every* type from the lazy 
programmer will be so wrapped. Many programmers overdo the IO monad as 
it is. With suitable sugar, they will become addicted!


Dan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Zumkeller numbers

2009-12-09 Thread Dan Weston
Ouch. That's what happens when you let a machine do the translation. How 
about:


Once your good name is trashed, you can live unabashed.


David Virebayre wrote:

On Wed, Dec 9, 2009 at 11:47 AM, Henning Thielemann
lemm...@henning-thielemann.de wrote:


Ist der Ruf erst ruiniert, lebt es sich ganz ungeniert. 8-]



Is there an English translation of it?


Google translate says : If the reputation is ruined, one can live
quite openly.

David.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] forkSequence, runPar, parallelize

2009-12-09 Thread Dan Weston
It's a good thing then that forkExec and return are denotationally equal 
(though not operationally). Otherwise, I'd be worried.
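
For reference, one common way such a forkExec is written (a sketch, not necessarily Antoine's version; note that an exception in the forked action would leave the MVar empty forever):

import Control.Concurrent (forkIO, newEmptyMVar, putMVar, readMVar)

forkExec :: IO a -> IO (IO a)
forkExec act = do
  v <- newEmptyMVar
  _ <- forkIO (act >>= putMVar v)  -- run the action in another thread
  return (readMVar v)              -- the returned action blocks until the result is ready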


Matthew Brecknell wrote:

Antoine Latter wrote:

A similar function that I'm fond of:

forkExec :: IO a -> IO (IO a)


It's cute that forkExec already has a dual operation with just the right
name (specialised to IO):

join :: IO (IO a) -> IO a



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Applicative but not Monad

2009-11-02 Thread Dan Weston
Obviously you know what you're talking about and I don't, so this is a
question purely out of ignorance.


It seems to me that Tomorrow cannot be parametrically polymorphic, or 
else I could wrap it again (Tomorrow (Tomorrow x)). An unwrapping
fixpoint operator needs to reflect the type to know when not to go too 
far. One solution is to guarantee that it can go as far as it wants with 
a comonad (stopping when the function argument wants to, not when the 
data type runs out of data):


import Control.Comonad
import Control.Monad.Fix

tfix :: Comonad tomorrow => (tomorrow x -> x) -> x
tfix = extract . (\f -> fix (extend f))

To quote Cabaret, if tomorrow belongs to me, then surely the day after 
belongs to me as well.


Otherwise, to stop the fixpoint, it seems you need a more restricted 
type to encode some stopping sentinel (my own parametrically polymorphic 
attempts all end in infinite types, so maybe ad hoc polymorphism or a 
type witness is needed?)


Do you have a blog post on this problem?

Dan

Conor McBride wrote:

On 2 Nov 2009, at 00:11, Ross Paterson wrote:


On Sun, Nov 01, 2009 at 04:20:18PM +, Conor McBride wrote:

On 31 Oct 2009, at 10:39, Conor McBride wrote:

I have an example, perhaps not a datatype:
tomorrow-you-will-know

Elaborating, one day later,

 if you know something today, you can arrange to know it tomorrow
 if you will know a function tomorrow and its argument tomorrow, you
   can apply them tomorrow
 but if you will know tomorrow that you will know something the
   day after, that does not tell you how to know the thing tomorrow

Yes, but if you will know tomorrow that you will know something
tomorrow, then you will know that thing tomorrow.


That depends on what tomorrow means tomorrow.


The applicative does coincide with a monad, just not the one you first
thought of (or/max rather than plus).


True, but it's not the notion I need to analyse circular programs.
I'm looking for something with a fixpoint operator

   fix :: (Tomorrow x -> x) -> x

which I can hopefully use to define things like

   repmin :: Tree Int -> (Int, Tomorrow (Tree Int))

so that the fixpoint operator gives me a Tomorrow Int which I
can use to build the second component, but ensures that the
black-hole-tastic Tomorrow (Tomorrow (Tree Int)) I also receive
is too late to be a serious temptation.

All the best

Conor

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Applicative but not Monad

2009-10-30 Thread Dan Weston

Can you elaborate on why Const is not a monad?

return x = Const x
fmap f (Const x) = Const (f x)
join (Const (Const x)) = Const x

What am I missing?

Tom Davie wrote:
Of note, there is a sensible monad instance for zip lists which I 
*think* agrees with the Applicative one, I don't know why they're not 
monads:


instance Monad (ZipList a) where
  return = ZipList . return
  join (ZipList []) = ZipList []
  join (ZipList (a:as)) = zlHead a `zlCons` join (map zlTail as)

I'll provide an alternative though, Const a is an applicative, but not a 
monad.


Bob

On Fri, Oct 30, 2009 at 5:25 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:


Yes. ZipList.
http://en.wikibooks.org/wiki/Haskell/Applicative_Functors

2009/10/30 Yusaku Hashimoto nonow...@gmail.com:
  Hello cafe,
  Do you know any data-type which is Applicative but not Monad?
 
  Cheers,
  -~nwn
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



--
Eugene Kirpichov
Web IR developer, market.yandex.ru http://market.yandex.ru
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Time and space complexity of take k . sort

2009-10-22 Thread Dan Weston
Unless of course you use a GHC RULE to rewrite the RHS into the LHS, 
which should always be a valid transformation.
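
Something along these lines, for instance (a sketch only; takeLargest here is one possible O(n*k)-time, O(k)-space implementation, and rewrite rules only fire with optimization enabled):

import Data.List (sort, foldl', insert)

-- keep a sorted buffer of at most k elements while scanning the list once
takeLargest :: Ord a => Int -> [a] -> [a]
takeLargest k = foldl' step []
  where step acc x = take k (insert x acc)

{-# RULES "take/sort" forall k xs. take k (sort xs) = takeLargest k xs #-}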


Ketil Malde wrote:

Paul Johnson p...@cogito.org.uk writes:


 takeLargest k = take k . sort



But of equal practical interest is the space complexity.  The optimum
algorithm is to take the first k items, sort them, and then iterate
through the remaining items by adding each item to the sorted list and
then throwing out the highest one.  That has space complexity O(k).
What does the function above do?


Well - 'sort' doesn't know the value of 'k', so it needs to retain all
elements, just in case 'k' might be 'n'.  So I don't see how you can use
space less than 'n' for a construct like the above.

-k


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there a null statement that does nothing?

2009-10-21 Thread Dan Weston
If you have a long if/else if/else chain, you might consider a trivial 
case statement with guards. Whether you think this is attractive is a 
matter of taste, but it has the fall-through semantics you want and ghc 
optimizes the _ pattern matching away:


f x = case () of
  _ | x == 2     -> 22
  _ | x == 4     -> 44
  _ | x == 7     -> 77
  _ | otherwise  -> 55

> f 4
44
> f 9
55

michael rice wrote:

Thanks guys,

I understand what you're telling me, but have some nested IFs and just 
want to fall through on one of the ELSES but then I end up with two 
ELSES in a row and nothing between them. Oh, well, on to restructuring.


Michael

--- On *Wed, 10/21/09, Tim Wawrzynczak /inforichl...@gmail.com/* wrote:


From: Tim Wawrzynczak inforichl...@gmail.com
Subject: Re: [Haskell-cafe] Is there a null statement that does nothing?
To: michael rice nowg...@yahoo.com
Cc: haskell-cafe@haskell.org
Date: Wednesday, October 21, 2009, 8:49 PM

Yes, an if statement must have both 'then' and 'else' branches.  As
an example, what if you had

let a = if b == 2 then True else False

and you were missing an else branch?  What would 'a' get assigned to?

The if statement returns a value so must have both branches.

However, in a monadic constraint, there are the functions 'when' and
'unless.'  They allow conditional evaluation of expressions in a
monadic context.  For example,

main = do
  line <- getLine
  when (line == "hello") $ putStrLn "Hello back!"

Cheers,
 - Tim


On Wed, Oct 21, 2009 at 7:43 PM, michael rice nowg...@yahoo.com wrote:

It looks like both the THEN and the ELSE in an IF expression
must each have an expression. What's a graceful way to do
nothing in either or both slots, kind of like the Fortran
CONTINUE statement.

  --mr



[mich...@localhost ~]$ ghci
GHCi, version 6.10.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Prelude> if (1==1) then else

<interactive>:1:15: parse error on input `else'
Prelude> if (1==1) then True else

<interactive>:1:24: parse error (possibly incorrect indentation)
Prelude> if (1==1) then True else False
True
Prelude>



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org /mc/compose?to=haskell-c...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] is proof by testing possible?

2009-10-12 Thread Dan Weston
Could that nice precise formulation simply be Scott continuity, which in 
turn preserves compactness through composition and under application?


Dan Piponi wrote:

On Mon, Oct 12, 2009 at 11:31 AM, Neil Brown nc...@kent.ac.uk wrote:

swap = undefined

Terminates and does not swap its arguments :-)  What do free theorems say
about this, exactly -- do they just implicitly exclude this possibility?


I'm terrible at reasoning about functions with bottoms (ie. undefined
or non-termination).

But I suspect that a property like this holds: if the type signature
of a function f is polymorphic in a, and it doesn't produce bottom for
one particular value x of type a, for some particular type a, f can
never produce bottom for any choice of x in any choice of type a.
(Which is not to say it might not fail, but that the failure will be
an issue with x, not f.)

The intuition behind this is that if a function is polymorphic in a
then it never examines the a. So even if a is bottom, the function can
never know it. But it could fail because f additionally accepts as
argument a polymorphic function that itself accepts a's, and that
fails. But then it wouldn't be f's fault, it'd be the fault of the
function you passed in.

This is very poor of me. There's probably a nice precise formulation
of what I've just said :-)
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to understand the 'forall' ?

2009-09-16 Thread Dan Weston
There is no magic here. This is merely explicit type specialization from 
the most general inferred type to something more specific. The 
denotational semantics of a function whose type is specialized does not 
change for those values belonging to the more specialized type.


f :: forall a. (Num a) => a -> a -> a
f x y = x + y

g :: Int -> Int -> Int
g x y = x + y

f and g denote (extensionally) the identical function, differing only in 
their type. g is a specialization of f. It is possible that 
(operationally) f could be slower if the compiler is not clever enough 
to avoid passing a type witness dictionary.


h :: forall b. b -> Char
h = const 'a'

k :: () -> Char
k = const 'a'

data Void
m :: Void -> Char
m = const 'a'

n :: (forall a. a) -> Char
n = const 'a'

h, k, m, and n yield *identical* values for any input compatible with 
the type of any two of the functions.


In constrast, whether a function is strict or non-strict is *not* a 
function of type specialization. Strictness is not reflected in the type 
system. Compare:


u x y = y `seq` const x y
v x y = const x y

Both have type forall a b. a -> b -> a but are denotationally different
functions:


u 2 undefined = undefined
v 2 undefined = 2

Cristiano Paris wrote:

On Wed, Sep 16, 2009 at 7:12 PM, Ryan Ingram ryani.s...@gmail.com wrote:

Here's the difference between these two types:

test1 :: forall a. a -> Int
-- The caller of test1 determines the type for test1
test2 :: (forall a. a) -> Int
-- The internals of test2 determines what type, or types, to instantiate the
argument at


I can easily understand your explanation for test2: the type var a is
closed under existential (?) quantification. I can't do the same for
test1, even if it seems that a is closed under universal (?)
quantification as well.


Or, to put it another way, since there are no non-bottom objects of type
(forall a. a):


Why?


test1 converts *anything* to an Int.


Is the only possible implementation of test1 the one ignoring its
argument (apart from bottom of course)?


test2 converts *nothing* to an Int

-- type-correct implementation
-- instantiates the (forall a. a) argument at Int
test2 x = x



-- type error, the caller chose a and it is not necessarily Int
-- test1 x = x

-- type-correct implementation: ignore the argument
test1 _ = 1


Cristiano
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] adding state in GUIs (qtHaskell)

2009-09-10 Thread Dan Weston

One simple solution is to leave the state in Qt.

As of Qt 4.2, in C++ you can use

  bool QObject::setProperty(const char * name, const QVariant & value)
  QVariant QObject::property(const char * name) const

to set and get properties on any QObject (hence any QWidget).

Since I believe these are (not yet) wrapped in QtHaskell, you can 
instead just create a widget that contains the state and just don't add 
it to a layout. Parent it to a widget and it will quietly disappear when 
its parent dies. If you want it to persist until you say so, don't 
parent it to anything (but then you might as well use Haskell for your 
state!)


Dan

Michael P Mossey wrote:
I'm trying to learn qtHaskell. I realize few people on this list know anything 
about qtHaskell, but I have a question that probably relates to all GUIs as 
implemented in Haskell. I just need a hint that could help me figure out the 
next step, which I might be able to infer from the qtHaskell API.


I don't think is any tutorial-type or step-by-step type documentation for 
qtHaskell. I have sent some questions to the author of qtHaskell, David Harley, 
but he hasn't responded yet, and anyway I don't want to trouble him every time I 
have a question, so I'm trying to infer as much as I can.


The problem relates to state. In Qt, one adds state to a widget by subclassing 
it and adding new member variables. For example, I want to create a widget that 
responds to keypresses and remembers what keypresses have taken place.


I'm totally stuck on this part, because Haskell doesn't have state. There must 
be some kind of Haskell call that adds state to a widget, but it is hard to 
figure out from the qtHaskell examples David provides.


Thanks,
Mike
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] forkM fails

2009-09-04 Thread Dan Weston

Try (>>) instead of `seq`.
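
In other words (a hedged reading, assuming the action you want to fork really is an IO action, since forkIO can only run IO):

import Control.Concurrent (ThreadId, forkIO)

forkM :: IO a -> IO ThreadId
forkM proc = forkIO (proc >> return ())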

Alberto G. Corona wrote:

Hi

I need to execute a procedure not in the IO monad, but in any monad:

I defined:

forkM :: Monad m => m a -> IO ThreadId
forkM proc = forkIO $ proc `seq` return ()

I assumed that seq would force the evaluation of proc and afterwards it
will discard its type (m a) and return () in the IO monad, as forkIO
expects.

however proc is not executed

Prelude> Control.Concurrent.forkIO $ print "hola"
ThreadId 331
"hola"
Prelude>
Prelude> let forkM p = Control.Concurrent.forkIO $ p `seq` return ()
Prelude> forkM $ print "hola"
ThreadId 493
Prelude>

Any idea?. Thanks in advance
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cannot build greencard

2009-09-02 Thread Dan Weston
Yet strangely, the last upload was Sun Apr 19 21:42:04 UTC 2009 and 
hackage claims it builds without failure with ghc-6.10.


And in fact it builds just fine for me, so maybe it is worth finding out 
why it doesn't build for you. Are you using ghc-6.10.4 and the latest 
version of cabal? I get:


 ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.10.4

 cabal --version
cabal-install version 0.6.2
using version 1.6.0.3 of the Cabal library

 cabal install greencard

Resolving dependencies...
Downloading greencard-3.0.3...
Configuring greencard-3.0.3...
Preprocessing library greencard-3.0.3...
Preprocessing executables for greencard-3.0.3...
Building greencard-3.0.3...
[1 of 1] Compiling Foreign.GreenCard ( lib/Foreign/GreenCard.hs, 
dist/build/Foreign/GreenCard.o )

/usr/bin/ar: creating dist/build/libHSgreencard-3.0.3.a
[ 1 of 21] Compiling Package  ( src/Package.lhs, 
dist/build/greencard/greencard-tmp/Package.o )
[ 2 of 21] Compiling ErrMonad ( src/ErrMonad.lhs, 
dist/build/greencard/greencard-tmp/ErrMonad.o )
[ 3 of 21] Compiling PrettyUtils  ( src/PrettyUtils.lhs, 
dist/build/greencard/greencard-tmp/PrettyUtils.o )
[ 4 of 21] Compiling Target   ( src/Target.lhs, 
dist/build/greencard/greencard-tmp/Target.o )
[ 5 of 21] Compiling Casm ( src/Casm.lhs, 
dist/build/greencard/greencard-tmp/Casm.o )


src/Casm.lhs:544:1:
Warning: Pattern match(es) are overlapped
 In a case alternative: _ -> ...

src/Casm.lhs:577:1:
Warning: Pattern match(es) are overlapped
 In a case alternative: _ -> ...

src/Casm.lhs:616:4:
Warning: Pattern match(es) are overlapped
 In a case alternative: _ -> ...

src/Casm.lhs:631:5:
Warning: Pattern match(es) are overlapped
 In a case alternative: _ -> ...
[ 6 of 21] Compiling GCToken  ( src/GCToken.lhs, 
dist/build/greencard/greencard-tmp/GCToken.o )
[ 7 of 21] Compiling ListUtils( src/ListUtils.lhs, 
dist/build/greencard/greencard-tmp/ListUtils.o )
[ 8 of 21] Compiling Name ( src/Name.lhs, 
dist/build/greencard/greencard-tmp/Name.o )
[ 9 of 21] Compiling Type ( src/Type.lhs, 
dist/build/greencard/greencard-tmp/Type.o )
[10 of 21] Compiling DIS  ( src/DIS.lhs, 
dist/build/greencard/greencard-tmp/DIS.o )
[11 of 21] Compiling FillInMonad  ( src/FillInMonad.lhs, 
dist/build/greencard/greencard-tmp/FillInMonad.o )
[12 of 21] Compiling NameSupply   ( src/NameSupply.lhs, 
dist/build/greencard/greencard-tmp/NameSupply.o )
[13 of 21] Compiling Decl ( src/Decl.lhs, 
dist/build/greencard/greencard-tmp/Decl.o )
[14 of 21] Compiling FillIn   ( src/FillIn.lhs, 
dist/build/greencard/greencard-tmp/FillIn.o )
[15 of 21] Compiling LexM ( src/LexM.lhs, 
dist/build/greencard/greencard-tmp/LexM.o )
[16 of 21] Compiling Lex  ( src/Lex.lhs, 
dist/build/greencard/greencard-tmp/Lex.o )
[17 of 21] Compiling MarshallMonad( src/MarshallMonad.lhs, 
dist/build/greencard/greencard-tmp/MarshallMonad.o )
[18 of 21] Compiling Proc ( src/Proc.lhs, 
dist/build/greencard/greencard-tmp/Proc.o )
[19 of 21] Compiling Parse( 
dist/build/greencard/greencard-tmp/Parse.hs, 
dist/build/greencard/greencard-tmp/Parse.o )


dist/build/greencard/greencard-tmp/Parse.hs:1122:1:
Warning: Pattern match(es) are overlapped
 In a case alternative: _ -> ...
[20 of 21] Compiling Process  ( src/Process.lhs, 
dist/build/greencard/greencard-tmp/Process.o )
[21 of 21] Compiling Main ( src/GreenCard.lhs, 
dist/build/greencard/greencard-tmp/Main.o )

Linking dist/build/greencard/greencard ...
Installing library in
/net/homedirs/westondan/.cabal/lib/greencard-3.0.3/ghc-6.10.4
Installing executable(s) in /net/homedirs/westondan/.cabal/bin
Registering greencard-3.0.3...
Reading package info from dist/installed-pkg-config ... done.
Writing new package config file... done.

Bulat Ziganshin wrote:

Hello mf-hcafe-15c311f0c,

Thursday, September 3, 2009, 12:44:25 AM, you wrote:

to the best of my knowledge, GC was developed in the days of ghc 0.20
or so and last 10 years it's superceded by FFI addendum to Haskell'98
standard. there is also more high-level tools like hsc2hs and HSFFIG



hi,



i am stuck with a linker error in greencard, and haven't found
anything online, so i am addressing you for fresh ideas.  as soon as i
get this sorted out, i will try to turn the answer into a patch that
you can consider for the next release.



SYMPTOMS: greencard 3.0.3 and 3.01 do not compile with ghc-6.8 (debian
lenny package) and 6.10 (darcs copy, checked out yesterday).  here is
what happens:




4 (0) 19:27:19 m...@yoyo:/tmp2 $ tar xvpzf greencard-3.0.3.tar.gz
greencard-3.0.3/
greencard-3.0.3/ANNOUNCE
greencard-3.0.3/dist/
greencard-3.0.3/dist/build/
greencard-3.0.3/dist/build/greencard/
greencard-3.0.3/dist/build/greencard/greencard-tmp/

Re: [Haskell-cafe] Is logBase right?

2009-08-24 Thread Dan Weston
I don't know if anyone actually answered the question you didn't ask, 
but you can always improve an inaccurate guess when you need to. A limit 
will always exist, and should be unique (independent of the initial 
guess), assuming (+) and (*) are well-conditioned.


In practice, a single first-order Taylor step should be enough:

logBase' :: Double -> Double -> Double
logBase' b y = if b == 0.0 then 1.0 else improve x0
  where bLogInv = 1.0 / log(b)
        f x     = x + (1.0 - b**x/y) * bLogInv

        -- First step is enough, if we guess smartly
        improve = f
        x0      = log(y) * bLogInv

        -- or use the limit from any initial guess
        -- improve x = let y = f x in if y == x then y else improve y
        -- x0      = 0.0

Dan

Roberto wrote:
Hi, 


There is a mistake is logBase:

$ ghci
GHCi, version 6.10.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Prelude> logBase 10 10
1.0
Prelude> logBase 10 100
2.0
Prelude> logBase 10 1000
2.9996  --- rrgg!
Prelude> logBase 10 1
4.0


My host is a Debian GNU/Linux 5.0.2 (lenny) with the following GHC packages:

ii  ghc6 6.10.4-1   
ii  ghc6-doc 6.10.4-1  
ii  libghc6-mtl-dev  1.1.0.2-7+b1  
ii  libghc6-utf8-string-dev  0.3.5-1+b1
ii  libghc6-x11-dev  1.4.5-6   
rc  libghc6-x11-doc  1.4.2-1   
ii  libghc6-x11-xft-dev  0.3-3+b3  
ii  libghc6-xmonad-contrib-dev   0.8.1-3+b3
rc  libghc6-xmonad-contrib-doc   0.8-2 
ii  libghc6-xmonad-dev   0.8.1-5   


Regards!


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Where do I put the seq?

2009-08-20 Thread Dan Weston

Peter,

I think you are right that there is no way in general to prevent a valid 
graph rewrite to remove a vacuous dependency. That is why seq is there.


The funny business is visible right in the type signature of seq:

seq :: forall a t. a -> t -> t

If seq had nonstrict semantics, this would be isomorphic to t -> t,
which is inhabited only by id. So, if seq is going to have any useful 
effect, it must be strict. Since Haskell is nonstrict by default (absent 
some deconstruction, which requires knowledge of the value 
constructors), you need an extra-language primitive to do this. Don't 
look to case to do this.


And other strictifying syntax constructs like ! are just syntactic 
sugar, so they don't count.
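
A tiny illustration of that extra-language behaviour (my own example, not Peter's program):

main :: IO ()
main = do
  let n = length [1 .. 10000000 :: Int]
  -- without the seq, n is never evaluated at all;
  -- with it, n is forced (to WHNF) before the putStrLn runs
  n `seq` putStrLn "n has been evaluated"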


Dan

Peter Verswyvelen wrote:
I totally agree that data dependencies are the best way to do that. And 
I'm beginning to see why interact might not be suitable for 
demonstrating FRP. 

On the other hand, when you say data dependencies, you mean that the 
value of expression A depends on the value of expression B, but what if 
that value is not really needed? 

For example, suppose you want a program that asks the name of the user 
and then outputs What a nice name or What a weird name depending on 
some random value. Even though the input value from the user is not 
used, we still can't output the text before the input is entered. Again 
the hidden dependency is time itself I guess, so we should do it the 
real FRP way, even with dumb console text IO.


Also, doesn't Haskell's IO system use a hidden RealWorld type that has
no value but which is passed between monadic binds in a strict way
to make the ordering work? So IO in Haskell is a horrible hack then? :-) 
If it would be done nicely, in the FRP way, then RealWorld IO would need 
time stamps to get rid of the hack?
  
So to do console IO the FRP way (say like in Reactive), the input lines 
 from the user would be Event String, and the output also Event String. 
Each event occurrence has a time stamp, and when merged, they would be 
ordered. It would be nice to show this example in Reactive. Too bad 
Reactive doesn't work (and it's not sure it ever will according to the 
comment of some users), but for a simple example like this, I'm sure it 
works. In Yampa, I'm not sure how console based IO would work, I guess 
it would need to generate event non-occurrences (Nothing) when the user 
did not type anything, and we would need non-blocking IO, so 100% CPU 
load, since it's pull based, not sure, to be investigated. I haven't 
worked with Elerea nor Grapefruit yet, but I'm not sure if I should 
treat the former as a real FRP system since it is not referentially 
transparent (it would be nice to know which combinators break it).
 
On the other hand, in this simple example, I could use a strict field in 
an ADT to enforce the input-output dependency, but I guess this is just 
the same big hack? 
(see http://hpaste.org/fastcgi/hpaste.fcgi/view?id=8316#a8367)


This silly example is causing lots of feedback, cool :-)

On Thu, Aug 20, 2009 at 7:12 PM, Lennart Augustsson lenn...@augustsson.net wrote:


Using seq to control a program's semantics (as in, input-output
behaviour) is a horrible hack.
The seq operation there to control space and time aspects of your
program.
(The specification of seq doesn't even say that the first argument is
evaluated before the second one.)
You should use data dependencies to control your program's semantics.

  On Thu, Aug 20, 2009 at 4:34 PM, David Leimbach leim...@gmail.com wrote:
 
 
   On Thu, Aug 20, 2009 at 2:52 AM, Jules Bean ju...@jellybean.co.uk wrote:
 
  Peter Verswyvelen wrote:
 
  Not at all, use it for whatever you want to :-)
 
  I'm writing this code because I'm preparing to write a bunch of
tutorials
  on FRP, and I first wanted to start with simple console based
FRP, e.g.
  making a little text adventure game, where the input/choices of
the user
  might be parsed ala parsec, using monadic style, applicative
style, and
  arrows, and then doing the same with FRP frameworks like
 
 
  This is a really bad place to start a FRP tutorial IMO.
 
  The interface for 'interact' does not make any promises about
the relative
  evaluation order of the input list / production order of the
output list.
 
  That's why you are having to play horrible tricks with seq to
try to force
  the order to be what you want.
 
  I don't think this is the basis of a robust system or a sensible
tutorial.
 
  Just my 2c.
 
  Interesting feedback, but I don't get the reason really.  How is
using seq a
  horrible trick?  It's there for strict evaluation when you need
it, and in
  this case it was warranted.
  And as far as saying it's not a good 

Re: [Haskell-cafe] Type family signatures

2009-08-14 Thread Dan Weston
But presumably he can use a data family instead of a type family to 
restore injectivity, at the cost of adding an extra wrapped bottom value 
and one more layer of value constructor?
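
Concretely, I believe a variant like this typechecks (a sketch only, I have not compiled it here), because a data family application can be decomposed: Fam a ~ Fam b implies a ~ b, so matching on GADT refines the type as wanted:

{-# LANGUAGE TypeFamilies, GADTs #-}

data family Fam a

data GADT t where
  GADT :: a -> Fam a -> GADT (Fam a)

unwrap :: GADT (Fam a) -> (a, Fam a)
unwrap (GADT x y) = (x, y)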


David Menendez wrote:

On Fri, Aug 14, 2009 at 11:06 AM, Thomas van Noorttho...@cs.ru.nl wrote:

Hello,

I have a question regarding type family signatures. Consider the following
type family:

 type family Fam a :: *

Then I define a GADT that takes such a value and wraps it:

 data GADT :: * -> * where
   GADT :: a -> Fam a -> GADT (Fam a)

and an accompanying unwrapper:

 unwrap :: GADT (Fam a) -> (a, Fam a)
 unwrap (GADT x y) = (x, y)

When Fam is declared using the first notation,

 type family Fam a :: *

GHC HEAD gives the following error message:

 Main.hs:9:21:
   Couldn't match expected type `a' against inferred type `a1'
 `a' is a rigid type variable bound by
 the type signature for `unwrap' at Main.hs:8:20
 `a1' is a rigid type variable bound by
  the constructor `GADT' at Main.hs:9:8
   In the expression: x
   In the expression: (x, y)
   In the definition of `unwrap': unwrap (GADT x y) = (x, y)


This is because type families are not injective. Nothing stops you
from defining two instances such as,

type instance Fam Int = Bool
type instance Fam Char = Bool

in which case a value of type GADT Bool is ambiguous. Does it contain
an Int or a Char?


However, when Fam is declared as (moving the a to the other side of the ::
and changing it into *),

 type family Fam :: * -> *

everything is ok. So, it seems to me that GHC HEAD considers both signatures
to be really different. However, I do not quite understand the semantic
difference in my example, other than that Fam needs to be fully satisfied in
its named type arguments. Note that GHC 6.10.3 does not accept the latter
signature for Fam since it requires at least one index for some reason,
that's why I'm using GHC HEAD.


A type family with no index is equivalent to a type synonym.

But in answer to your question, these signatures are very different.
Consider these families.

type family Foo a b :: *
type family Bar a :: * -> *

Foo is indexed by two parameters, but Bar is only indexed by one.

type instance Foo A B = X
type instance Foo A C = X
-- Foo a b ~ Foo a c does not imply b ~ c

type instance Bar A = []
-- Bar a b ~ Bar a c does imply b ~ c

Bar returns a type constructor, so it can be used anywhere a type
constructor of kind * -> * can be used.

foo :: (Functor (Foo a)) => ...   -- invalid
bar :: (Functor (Bar a)) => ...   -- valid



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-14 Thread Dan Weston

My intuition says the proper formalism is that undo is left adjoint to redo.

They together form a monad in the category of redoable actions. return 
lifts doable actions to undoable ones by attaching an empty undo stack. 
join lowers (reflects) a first-class undoable action out of the undo 
stack and makes it doable.


The reason that the adjunction view is more fundamental here is that the 
monad is in some sense the equivalence class of all observationally 
equivalent undo/redo pairs. That is, undo need not actually restore the 
previous state: it is sufficient to restore any action that, when 
redone, gives the same state as the one before undo.


There may be justification for this idea in Rossiter et al, Process as 
a World Transaction 
(http://computing.unn.ac.uk/staff/CGNR1/anpa064.pdf), though I haven't 
had time to read it yet.


Dan

Peter Verswyvelen wrote:

I think version is control is really just a subset of a larger effect
theory. E.g. I've been experimenting with a parallel undo/redo system
in C#, where some actions can commute and be undone separately, and
for detecting this, the actions need to explicitly expose what they
will change; so this also seems in the same line of research (and has
been reported earlier in the thread darcs as undo/redo system).

And if I recall correctly, imperative compilers can reorder and
parallelize instructions based on what they read/write; again this
feels the same.

Like John said, it will be interesting when the operating system
itself exposes all dependencies (what it reads/writes), so that if
e.g. content of file A is used to generate content of file B, that
this is not some spooky action at a distance. Now the OS is treated
like a big black IO box (realworld in - realworld out), just because
we're still stuck with dumb hierarchical file systems and other opaque
IO.

Another example might be FRP / Yampa, and the recent work of Hai
(Paul) Liu, Paul Hudak and co, where causal commutative arrows are
invented. AFRP computations really commute, while standard arrows are
just a generalization of monads, so not really suitable for capturing
the parallel nature of AFRP. The way I discovered this myself, is that
when you have e.g. a large tree of user interface widgets, represented
by a big arrow circuit, and the user edits just the one widget in one
branch (which happens when e.g. the mouse is captured), then with the
current arrows system all branches will be visited depth first. But of
course only the path towards the widget that will change needs to be
visited, all the other remain constant.

Since I don't have any academic backgrounds - only intuition - I'm not
sure if these topics are related, but they sure feel like it :-)

On Fri, Aug 14, 2009 at 11:38 PM, Jason Dagitda...@codersbase.com wrote:


On Fri, Aug 14, 2009 at 1:41 PM, John A. De Goes j...@n-brain.net wrote:

Hmmm, my point (perhaps I wasn't clear), is that different effects have
different commutability properties. In the case of a file system, you can
commute two sequential reads from two different files. This has no effect on
the result of the computation, assuming no interference from other programs
-- and if there _is_ interference from other programs, then guarantees go
out the window, _with or without_ commuting.
Monads are an insufficient structure to capture the fine semantics of
effects. Something more powerful is needed. And in general, the way effects
combine and commute is quite complicated and needs to be baked into the
effect system, rather than being the responsibility of a lowly developer.

It's really interesting.  This is related to the reasoning darcs does with
patches (aka patch theory).  Patches tend to have effects on your
repository.  Sometimes those effects can be reordered without changing the
final state of the repository.  Other times, it is not possible to reorder
them without either having a non-sensible final state or different final
states.  I've never thought about reading research about effect systems for
the sake of version control.  I'll have to look into this.

Jason

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec: using two different parser for the same string

2009-08-06 Thread Dan Weston

Paul,

Arrows (and category theory in general) are interesting, but you 
certainly don't need to understand them for this.
The only arrow in this code is the lowly function arrow (-). () and 
(|||) are duals of each other and mean, respectively, both and 
either (though for some bizarre reason, both is usually called 
fanout!)


This style of pointfree (or pointless) code is clearer to me because I 
don't have a bunch of variable names to invent and have lying around.


Anyway, if you prefer, don't import Control.Arrow at all, and just use:

-- |Both: Apply two functions to same argument and tuple the results
infixr 3 &&&
(&&&) :: (a -> b) -> (a -> c) -> a -> (b,c)
(f &&& g) x = (f x, g x)

-- |Either: If argument is Left, apply Left function, else apply Right 
function

infixr 2 |||
(|||) :: (a -> c) -> (b -> c) -> Either a b -> c
(|||) = either

either is implicitly imported from the Prelude and is defined as:

-- | Case analysis for the 'Either' type.
-- If the value is @'Left' a@, apply the first function to @a@;
-- if it is @'Right' b@, apply the second function to @b...@.
either  :: (a - c) - (b - c) - Either a b - c
either f _ (Left x) =  f x
either _ g (Right y)=  g y
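
A quick check of how these behave in GHCi (the example calls here are mine,
not from the original message):

Prelude Control.Arrow> ((+1) &&& (*2)) 7
(8,14)
Prelude Control.Arrow> (length ||| negate) (Left "abc")
3
Prelude Control.Arrow> (length ||| negate) (Right 5)
-5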

Dan

Paul Sujkov wrote:

Hi Dan,

thank you for the solution. It looks pretty interesting and usable, 
however I'll have to spend some time understanding arrows: I never had 
an opportunity to use them before. Anyway, it looks very close to what I 
actually need, and in any case much less ugly than breaking the 
GenParser encapsulation


2009/8/6 Dan Weston weston...@imageworks.com 


Of course, since ParsecT s u m is a functor, feel free to use fmap
instead of parsecMap. Then you don't need to import from
Text.Parsec.Prim.
And in hindsight, I might prefer the name (<:>) or cons to (<>) for
the first function, but now I'm just obsessing. :)

Dan


Dan Weston wrote:

I think parsecMap does the job here:

---
import Text.ParserCombinators.Parsec hiding ((<|>))
import Text.Parsec.Prim(parsecMap)
import Control.Applicative((<|>))
import Control.Arrow((|||),(&&&))

-- Tagged (:)
(<>) :: Either Char Char -> Either String String -> Either String String
Left  a <> Left  b = Left  (a:b)
Left  a <> Right b = Left  (a:b)
Right a <> Left  b = Left  (a:b)
Right a <> Right b = Right (a:b)

-- Tagged concat
stringParser :: [Either Char Char] -> Either String String
stringParser = foldr (<>) (Right "")

-- Parse Integer if properly tagged, keeping unparsed string
maybeToInteger :: Either String String -> (Maybe Integer, String)
maybeToInteger = (const Nothing ||| Just . read) &&& (id ||| id)

-- Tagged-choice parser
intOrStringParser = parsecMap (maybeToInteger . stringParser)
  $ many1 (parsecMap Right digit <|> parsecMap Left (noneOf ";)"))

-- Parse between parentheses
intOrStringListParser = between (char '(')
                                (char ')')
                                (sepBy1 intOrStringParser (char ';'))
---

Then you get a tagged version of each string, along with the
string itself:

*P> parseTest intOrStringListParser $ "(1;2w4;8;85)"
[(Just 1,"1"),(Nothing,"2w4"),(Just 8,"8"),(Just 85,"85")]

There may be some parsecMap-fold fusion optimization possible,
though I haven't looked into that.

Dan

Paul Sujkov wrote:

Hi everybody,

suppose I have two different parsers: one just reads the
string, and another one parses some values from it. E.g.:

parseIntList :: Parser [Integer]
parseIntList = do
 char '('
 res <- liftM (map read) (sepBy1 (many1 digit) (char ';'))
 char ')'
 return res

parseIntString :: Parser String
parseIntString = manyTill anyChar eof

so for some input like this - "(1;2;3;4)" - I will have two
different results:

*Parlog> parseTest parseIntList "(1;2;3;4)"
[1,2,3,4]
*Parlog> parseTest parseIntString "(1;2;3;4)"
"(1;2;3;4)"

but the thing that I actually want is something like Parser
([Integer], String) - results from both parsers at a time,
no matter whether one of them fails or not:

*Parlog> parseTest parseIntListAndString "(1;2;3;4)"
([1,2,3,4], "(1;2;3;4)")

it is impossible at first sight, because first parser to use
will consume all the input, and there will be nothing to
parse for the second one

Parsec contains the choice function, but it is implemented via
<|> and that is mplus - so it tries the second alternative only
if the first one fails

Re: [Haskell-cafe] Hugs faster and more reliable than GHC for large list monad 'do' block

2009-08-06 Thread Dan Weston
I assume for the return line, you meant to return a list, not a tuple. 
ghc doesn't support a 600-tuple.
In any case, returning a list, I have verified that this problem exists 
in ghc 6.10.3, for -O0 and -O2.


For -O0, it compiles and links fine, but gives this runtime message:

z: internal error: scavenge: unimplemented/strange closure type -1 @ 
0x2a95a8e000

(GHC version 6.10.3 for x86_64_unknown_linux)
Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
Abort

Maybe it is attempting to unroll these loops, even with -O0?

Dan

Henning Thielemann wrote:
Today a student has shown me a program that consists of a large 'do' 
block for the list monad. The program looks like


do x1 <- [0..3]
   x2 <- [0..2]
   ...
   x600 <- [0..5]
   guard (x1+x2+2*x3 >= 0)
   ...
   return (x1,x2,...,x600)

It was actually generated by another program. The results were:

  GHC-6.4 was not able to compile that program at all, because it stopped 
because of memory exhaustion.
  GHC-6.8.2 finished compilation after two minutes but the program aborted 
quickly because of a corrupt thunk identifier.

  GHC-6.10 not yet tested.
  Hugs-2006 executed the program without complaining and showed the first 
result after a few seconds: (0,0,0,0,0,...,0).


Eventually the program must run on a Linux cluster with a not up-to-date 
Linux kernel, that is, I suspect newer GHC versions cannot be used due to 
the 'timer_create' problem. (At least, the 'cabal' executable that I 
generated with a GHC-6.8.2 had this problem when running on the cluster 
which reminded me on the problems with GHC-6.8 itself running on older 
Linux kernels.)


Since the list monad sorts the variable values in lexicographic order 
which is inappropriate for the considered problem, I recommended the use 
of control-monad-omega. Luke, I hope this monad can cope with 600 
variables. :-)
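
For a concrete picture of that lexicographic order, here is a two-variable
version of the same shape (my GHCi session, not from the original program):

Prelude> do { x <- [0..2]; y <- [0..2]; return (x,y) }
[(0,0),(0,1),(0,2),(1,0),(1,1),(1,2),(2,0),(2,1),(2,2)]

A diagonalizing monad such as control-monad-omega is meant to interleave the
generators instead, so early variables need not be exhausted before later
ones vary.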

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hugs faster and more reliable than GHC for large list monad 'do' block

2009-08-06 Thread Dan Weston

No, I am using the latest released ghc:

 ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.10.4

[ z.hs is attached ]

 time ghc -O0 --make z.hs
[1 of 1] Compiling Main ( z.hs, z.o )
Linking z ...
14.422u 0.630s 0:15.10 99.6%0+0k 0+0io 0pf+0w

 time ./z
z: internal error: scavenge: unimplemented/strange closure type -1 @ 
0x2a95a8e000

(GHC version 6.10.4 for x86_64_unknown_linux)
Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
Abort
0.007u 0.007s 0:00.02 0.0%  0+0k 0+0io 0pf+0w

Dan

Neil Mitchell wrote:

Hi

I think the issue you're running in to with 6.4 is this one:
http://hackage.haskell.org/trac/ghc/ticket/830 - known and fixed a
while back.

Thanks

Neil

On Thu, Aug 6, 2009 at 9:59 PM, Dan Westonweston...@imageworks.com wrote:

I assume for the return line, you meant to return a list, not a tuple. ghc
doesn't support a 600-tuple.
In any case, returning a list, I have verified that this problem exists in
ghc 6.10.3, for -O0 and -O2.

For -O0, it compiles and links fine, but gives this runtime message:

z: internal error: scavenge: unimplemented/strange closure type -1 @
0x2a95a8e000
   (GHC version 6.10.3 for x86_64_unknown_linux)
   Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
Abort

Maybe it is attempting to unroll these loops, even with -O0?

Dan

Henning Thielemann wrote:

Today a student has shown me a program that consists of a large 'do' block
for the list monad. The program looks like

   do x1 <- [0..3]
      x2 <- [0..2]
      ...
      x600 <- [0..5]
      guard (x1+x2+2*x3 >= 0)
      ...
      return (x1,x2,...,x600)

It was actually generated by another program. The results were:

 GHC-6.4 was not able to compile that program at all, because it stopped
because of memory exhaustion.
 GHC-6.8.2 finished compilation after two minutes but the program aborted
quickly because of a corrupt thunk identifier.
 GHC-6.10 not yet tested.
 Hugs-2006 executed the program without complaining and showed the first
result after a few seconds: (0,0,0,0,0,...,0).

Eventually the program must run on a Linux cluster with a not up-to-date
Linux kernel, that is, I suspect newer GHC versions cannot be used due to
the 'timer_create' problem. (At least, the 'cabal' executable that I
generated with a GHC-6.8.2 had this problem when running on the cluster
which reminded me on the problems with GHC-6.8 itself running on older Linux
kernels.)

Since the list monad sorts the variable values in lexicographic order
which is inappropriate for the considered problem, I recommended the use of
control-monad-omega. Luke, I hope this monad can cope with 600 variables.
:-)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




import Control.Monad(guard)

main = print z

z :: [[Int]]
z=   do x1  <- [0..3]
        x2  <- [0..3]
        x3  <- [0..3]
        x4  <- [0..3]
        x5  <- [0..3]
        x6  <- [0..3]
        x7  <- [0..3]
        x8  <- [0..3]
        x9  <- [0..3]
        x10 <- [0..3]
        x11 <- [0..3]
        x12 <- [0..3]
        x13 <- [0..3]
        x14 <- [0..3]
        x15 <- [0..3]
        x16 <- [0..3]
        x17 <- [0..3]
        x18 <- [0..3]
        x19 <- [0..3]
        x20 <- [0..3]
        x21 <- [0..3]
        x22 <- [0..3]
        x23 <- [0..3]
        x24 <- [0..3]
        x25 <- [0..3]
        x26 <- [0..3]
        x27 <- [0..3]
        x28 <- [0..3]
        x29 <- [0..3]
        x30 <- [0..3]
        x31 <- [0..3]
        x32 <- [0..3]
        x33 <- [0..3]
        x34 <- [0..3]
        x35 <- [0..3]
        x36 <- [0..3]
        x37 <- [0..3]
        x38 <- [0..3]
        x39 <- [0..3]
        x40 <- [0..3]
        x41 <- [0..3]
        x42 <- [0..3]
        x43 <- [0..3]
        x44 <- [0..3]
        x45 <- [0..3]
        x46 <- [0..3]
        x47 <- [0..3]
        x48 <- [0..3]
        x49 <- [0..3]
        x50 <- [0..3]
        x51 <- [0..3]
        x52 <- [0..3]
        x53 <- [0..3]
        x54 <- [0..3]
        x55 <- [0..3]
        x56 <- [0..3]
        x57 <- [0..3]
        x58 <- [0..3]
        x59 <- [0..3]
        x60 <- [0..3]
        x61 <- [0..3]
        x62 <- [0..3]
        x63 <- [0..3]
        x64 <- [0..3]
        x65 <- [0..3]
        x66 <- [0..3]
        x67 <- [0..3]
        x68 <- [0..3]
        x69 <- [0..3]
        x70 <- [0..3]
        x71 <- [0..3]
        x72 <- [0..3]
        x73 <- [0..3]
        x74 <- [0..3]
        x75 <- [0..3]
        x76 <- [0..3]
        x77 <- [0..3]

Re: [Haskell-cafe] Hugs faster and more reliable than GHC for large list monad 'do' block

2009-08-06 Thread Dan Weston
I should clarify that this was done on an older kernel (bootstrapped 
from ghc 6.4 as Jeff Heard suggested), in case that makes any difference:


Main memory size: 7971 Mbytes
1 GenuineIntel Intel(R) Xeon(TM) CPU 3.40GHz processor
Swap Size: 2047 Mbytes
Num Processors: 1
Processor Type:   Intel(R) Xeon(TM) CPU 3.40GHz x64
Clock Speed: 3400 MHZ
OS: Linux 2.6.9-42.0.3.EL.spi
OS-VERSION: CentOS release 4.4 (Final)
OS-HW: x86_64

Dan Weston wrote:

No, I am using the latest released ghc:

  ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.10.4

[ z.hs is attached ]

  time ghc -O0 --make z.hs
[1 of 1] Compiling Main ( z.hs, z.o )
Linking z ...
14.422u 0.630s 0:15.10 99.6%0+0k 0+0io 0pf+0w

  time ./z
z: internal error: scavenge: unimplemented/strange closure type -1 @ 
0x2a95a8e000

 (GHC version 6.10.4 for x86_64_unknown_linux)
 Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
Abort
0.007u 0.007s 0:00.02 0.0%  0+0k 0+0io 0pf+0w

Dan

Neil Mitchell wrote:

Hi

I think the issue you're running in to with 6.4 is this one:
http://hackage.haskell.org/trac/ghc/ticket/830 - known and fixed a
while back.

Thanks

Neil

On Thu, Aug 6, 2009 at 9:59 PM, Dan Westonweston...@imageworks.com wrote:

I assume for the return line, you meant to return a list, not a tuple. ghc
doesn't support a 600-tuple.
In any case, returning a list, I have verified that this problem exists in
ghc 6.10.3, for -O0 and -O2.

For -O0, it compiles and links fine, but gives this runtime message:

z: internal error: scavenge: unimplemented/strange closure type -1 @
0x2a95a8e000
   (GHC version 6.10.3 for x86_64_unknown_linux)
   Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
Abort

Maybe it is attempting to unroll these loops, even with -O0?

Dan

Henning Thielemann wrote:

Today a student has shown me a program that consists of a large 'do' block
for the list monad. The program looks like

   do x1 <- [0..3]
      x2 <- [0..2]
      ...
      x600 <- [0..5]
      guard (x1+x2+2*x3 >= 0)
      ...
      return (x1,x2,...,x600)

It was actually generated by another program. The results were:

 GHC-6.4 was not able to compile that program at all, because it stopped
because of memory exhaustion.
 GHC-6.8.2 finished compilation after two minutes but the program aborted
quickly because of a corrupt thunk identifier.
 GHC-6.10 not yet tested.
 Hugs-2006 executed the program without complaining and showed the first
result after a few seconds: (0,0,0,0,0,...,0).

Eventually the program must run on a Linux cluster with a not up-to-date
Linux kernel, that is, I suspect newer GHC versions cannot be used due to
the 'timer_create' problem. (At least, the 'cabal' executable that I
generated with a GHC-6.8.2 had this problem when running on the cluster
which reminded me on the problems with GHC-6.8 itself running on older Linux
kernels.)

Since the list monad sorts the variable values in lexicographic order
which is inappropriate for the considered problem, I recommended the use of
control-monad-omega. Luke, I hope this monad can cope with 600 variables.
:-)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seeking for an extention (functional incapsulation)

2009-08-06 Thread Dan Weston

Is there any good extension? Yes, it's in Control.Applicative.



Belka wrote:

Hello, cafe visitors! :)

This is a double topic:
1. Can't find any good informative resource with descriptions of Haskell
extensions. Could anybody please share good one if it exists?
The only good one I found:
http://hackage.haskell.org/trac/haskell-prime/wiki/HaskellExtensions
But it's a bit too old and not that full... 
I undestand, that Haskell is kind of boiling language, in a sense of being

neverending experiment. It develops all the time, extensions show up and
drop out. So it's not that easy to support community with a fresh
information about them. But on the other side, the property (of being
boiling language) makes such information really important for community
members... I think. :)

2. Consider situation:
---
data SomeDataType = SomeDataType { sdtField1 :: SDT_Field1Type, sdtField2 ::
SDT_Field2Type}
sdtField3 :: SomeDataType -> SDT_Field3Type
sdtField3 sdt = f (sdtField1 sdt) (sdtField2 sdt)
---
I induced recently, that it would be very comfortable if I could perform in
a way like this:
---
data SomeDataType = SomeDataType { sdtField1 :: SDT_Field1Type, sdtField2 ::
SDT_Field2Type, sdtField3 :: SDT_Field2Type, sdtField3 = f sdtField1
sdtField2}
---
The situation is not that rare, when dealing with nonprimitive data
constructions. Moreover would be really comfortable to reduce
---
data SomeDataType = SomeDataType_111 { sdtField1 :: SDT_Field1Type,
sdtField2 :: SDT_Field2Type} | SomeDataType_222 { sdtField1 ::
SDT_Field1Type, sdtField2 :: SDT_Field2Type, sdtField5 :: SDT_Field5Type}

sdtField3 :: SomeDataType -> SDT_Field3Type
sdtField3 sdt = case sdt of {SomeDataType_111 -> f (sdtField1 sdt)
(sdtField2 sdt) ; SomeDataType_222 -> g (sdtField1 sdt) (sdtField2 sdt)
(sdtField5 sdt)}

\/ \/ \/ \/ \/ \/ \/ \/ \/ \/

data SomeDataType = SomeDataType_111 { sdtField1 :: SDT_Field1Type,
sdtField2 :: SDT_Field2Type, sdtField3 :: SDT_Field3Type, sdtField3 = f
sdtField1 sdtField2} | SomeDataType_222 { sdtField1 :: SDT_Field1Type,
sdtField2 :: SDT_Field2Type, sdtField5 :: SDT_Field5Type, sdtField3 ::
SDT_Field3Type, sdtField3 = g sdtField1 sdtField2 sdtField5}
---

Usable mechanics for realization would be:
1. Function similar to Data.Function.on (example: (*) `on` f = \x y -> f x *
f y), but opposite - I called it under. 
t `under` f = \x y -> (x f) `t` (y f)

2. currying and uncurrying

Is there any such extension?

Belka


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seeking for an extention (functional incapsulation)

2009-08-06 Thread Dan Weston

More specifically:

sdtField3 sdt = f <$> sdtField1 <*> sdtField2

You don't really need this inline in the record syntax, do you?
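
A minimal sketch of what that looks like spelled out (the concrete field
types and the combining function are made up here, since the original post
leaves them abstract):

import Control.Applicative ((<$>), (<*>))

data SomeDataType = SomeDataType { sdtField1 :: Int, sdtField2 :: Int }

-- The ((->) SomeDataType) applicative feeds the same record to both
-- accessors, so the derived field needs no explicit record argument:
sdtField3 :: SomeDataType -> Int
sdtField3 = (+) <$> sdtField1 <*> sdtField2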

Dan Weston wrote:

Is there any good extension? Yes, it's in Control.Applicative.



Belka wrote:

Hello, cafe visitors! :)

This is a double topic:
1. Can't find any good informative resource with descriptions of Haskell
extensions. Could anybody please share good one if it exists?
The only good one I found:
http://hackage.haskell.org/trac/haskell-prime/wiki/HaskellExtensions
But it's a bit too old and not that full... 
I undestand, that Haskell is kind of boiling language, in a sense of being

neverending experiment. It develops all the time, extensions show up and
drop out. So it's not that easy to support community with a fresh
information about them. But on the other side, the property (of being
boiling language) makes such information really important for community
members... I think. :)

2. Consider situation:
---
data SomeDataType = SomeDataType { sdtField1 :: SDT_Field1Type, sdtField2 ::
SDT_Field2Type}
sdtField3 :: SomeDataType -> SDT_Field3Type
sdtField3 sdt = f (sdtField1 sdt) (sdtField2 sdt)
---
I induced recently, that it would be very comfortable if I could perform in
a way like this:
---
data SomeDataType = SomeDataType { sdtField1 :: SDT_Field1Type, sdtField2 ::
SDT_Field2Type, sdtField3 :: SDT_Field2Type, sdtField3 = f sdtField1
sdtField2}
---
The situation is not that rare, when dealing with nonprimitive data
constructions. Moreover would be really comfortable to reduce
---
data SomeDataType = SomeDataType_111 { sdtField1 :: SDT_Field1Type,
sdtField2 :: SDT_Field2Type} | SomeDataType_222 { sdtField1 ::
SDT_Field1Type, sdtField2 :: SDT_Field2Type, sdtField5 :: SDT_Field5Type}

sdtField3 :: SomeDataType -> SDT_Field3Type
sdtField3 sdt = case sdt of {SomeDataType_111 -> f (sdtField1 sdt)
(sdtField2 sdt) ; SomeDataType_222 -> g (sdtField1 sdt) (sdtField2 sdt)
(sdtField5 sdt)}

\/ \/ \/ \/ \/ \/ \/ \/ \/ \/

data SomeDataType = SomeDataType_111 { sdtField1 :: SDT_Field1Type,
sdtField2 :: SDT_Field2Type, sdtField3 :: SDT_Field3Type, sdtField3 = f
sdtField1 sdtField2} | SomeDataType_222 { sdtField1 :: SDT_Field1Type,
sdtField2 :: SDT_Field2Type, sdtField5 :: SDT_Field5Type, sdtField3 ::
SDT_Field3Type, sdtField3 = g sdtField1 sdtField2 sdtField5}
---

Usable mechanics for realization would be:
1. Function similar to Data.Function.on (example: (*) `on` f = \x y -> f x *
f y), but opposite - I called it under. 
t `under` f = \x y -> (x f) `t` (y f)

2. currying and uncurrying

Is there any such extension?

Belka




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seeking for an extention (functional incapsulation)

2009-08-06 Thread Dan Weston


 the value grows exponentially with LOC (lines of code) count. :)

Exponentially? Now I'm missing something...

Your way has 156 chars:
data SomeDataType = SomeDataType { sdtField1 :: SDT_Field1Type, sdtField2 ::
 SDT_Field2Type, sdtField3 :: SDT_Field2Type, sdtField3 = f sdtField1
 sdtField2}

This way has 162 chars:
data SomeDataType = SomeDataType { sdtField1 :: SDT_Field1Type, 
sdtField2 :: SDT_Field2Type}

sdtField3 :: SDT_Field2Type
sdtField3 = f <$> sdtField1 <*> sdtField2

This adds 6 characters per dependent deconstructor function. As a 
fraction of total LOC, this is insignificant.


Belka wrote:

Thank you, for your reply, Dan! :)


You don't really need this inline in the record syntax, do you?

In fact, that was the point. To enclose direct functional dependants into
the record declaration. To achieve better pithiness - it's valuable, and the
value grows exponentially with LOC (lines of code) count. :)


sdtField3 sdt = f <$> sdtField1 <*> sdtField2

Doesn't look much better than my under function (t `under` f = \x y -> (x
f) `t` (y f)). What did I miss?
I believe, there are good reasons to use Control.Applicative for lots
purposes, but unfortunately, yet haven't had time to try it in my practice.

Belka


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec: using two different parser for the same string

2009-08-05 Thread Dan Weston

I think parsecMap does the job here:

---
import Text.ParserCombinators.Parsec hiding ((<|>))
import Text.Parsec.Prim(parsecMap)
import Control.Applicative((<|>))
import Control.Arrow((|||),(&&&))

-- Tagged (:)
(<>) :: Either Char Char -> Either String String -> Either String String
Left  a <> Left  b = Left  (a:b)
Left  a <> Right b = Left  (a:b)
Right a <> Left  b = Left  (a:b)
Right a <> Right b = Right (a:b)

-- Tagged concat
stringParser :: [Either Char Char] -> Either String String
stringParser = foldr (<>) (Right "")

-- Parse Integer if properly tagged, keeping unparsed string
maybeToInteger :: Either String String -> (Maybe Integer, String)
maybeToInteger = (const Nothing ||| Just . read) &&& (id ||| id)

-- Tagged-choice parser
intOrStringParser = parsecMap (maybeToInteger . stringParser)
  $ many1 (parsecMap Right digit <|> parsecMap Left (noneOf ";)"))

-- Parse between parentheses
intOrStringListParser = between (char '(')
                                (char ')')
                                (sepBy1 intOrStringParser (char ';'))
---

Then you get a tagged version of each string, along with the string itself:

*P> parseTest intOrStringListParser $ "(1;2w4;8;85)"
[(Just 1,"1"),(Nothing,"2w4"),(Just 8,"8"),(Just 85,"85")]

There may be some parsecMap-fold fusion optimization possible, though I 
haven't looked into that.


Dan

Paul Sujkov wrote:

Hi everybody,

suppose I have two different parsers: one just reads the string, and 
another one parses some values from it. E.g.:


parseIntList :: Parser [Integer]
parseIntList = do
  char '('
  res <- liftM (map read) (sepBy1 (many1 digit) (char ';'))
  char ')'
  return res

parseIntString :: Parser String
parseIntString = manyTill anyChar eof

so for some input like this - "(1;2;3;4)" - I will have two different 
results:


*Parlog> parseTest parseIntList "(1;2;3;4)"
[1,2,3,4]
*Parlog> parseTest parseIntString "(1;2;3;4)"
"(1;2;3;4)"

but the thing that I actually want is something like Parser ([Integer], 
String) - results from both parsers at a time, no matter whether one of 
them fails or not:


*Parlog> parseTest parseIntListAndString "(1;2;3;4)"
([1,2,3,4], "(1;2;3;4)")

it is impossible at first sight, because first parser to use will 
consume all the input, and there will be nothing to parse for the second one


Parsec contains the choice function, but it is implemented via <|> and 
that is mplus - so it tries the second alternative only if the first one 
fails. Is it possible to use two parsers for the same string (with 
try-like backtracking, no input actually consumed till the second parser 
finishes)? I can assume only dirty hacks with the GenParser internals - 
manual position storing and backtracking - but that is obviously not good


however, my first attempt to solve the problem was kind a like that: to 
parse string to String, and then to use it as an input for the next 
level parse call:


parseIntListAndString :: Parser ([Integer], String)
parseIntListAndString = do
  str <- parseIntString
  return (res str, str)
  where res str = case (parse parseIntList "" str) of
Left  err -> []
Right val -> val

but the problems with such a method began when I switched from Parser to 
GenParser with user state: function parseIntList have to update the 
state, but it can't have the same state as the parseIntListAndString any 
more: it has it's own. I can explicitly pass the state from 
parseIntListAndString to parseIntList, but I see no suitable way for the 
parseIntList to update it. I can return the updated state value from the 
parseIntList function, and call setState on a result - but it seems 
rather ugly to mee. However, if nothing else will do, that is an alternative


it is of course possible to use two different parsers sequentially, but 
it is also very ineffective: I need to use such multiple parsing on a 
relatively small substring of the actual input, so little backtracking 
would be a much nicier approach. Any suggestions?


--
Regards, Paul Sujkov



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec: using two different parser for the same string

2009-08-05 Thread Dan Weston
Of course, since ParsecT s u m is a functor, feel free to use fmap 
instead of parsecMap. Then you don't need to import from Text.Parsec.Prim.
And in hindsight, I might prefer the name (<:>) or cons to (<>) for the 
first function, but now I'm just obsessing. :)
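
For example, the tagged-choice parser in the code quoted below can then be
written with fmap directly (same behaviour, just no Text.Parsec.Prim import
needed):

intOrStringParser = fmap (maybeToInteger . stringParser)
   $ many1 (fmap Right digit <|> fmap Left (noneOf ";)"))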


Dan

Dan Weston wrote:

I think parsecMap does the job here:

---
import Text.ParserCombinators.Parsec hiding ((<|>))
import Text.Parsec.Prim(parsecMap)
import Control.Applicative((<|>))
import Control.Arrow((|||),(&&&))

-- Tagged (:)
(<>) :: Either Char Char -> Either String String -> Either String String
Left  a <> Left  b = Left  (a:b)
Left  a <> Right b = Left  (a:b)
Right a <> Left  b = Left  (a:b)
Right a <> Right b = Right (a:b)

-- Tagged concat
stringParser :: [Either Char Char] -> Either String String
stringParser = foldr (<>) (Right "")

-- Parse Integer if properly tagged, keeping unparsed string
maybeToInteger :: Either String String -> (Maybe Integer, String)
maybeToInteger = (const Nothing ||| Just . read) &&& (id ||| id)

-- Tagged-choice parser
intOrStringParser = parsecMap (maybeToInteger . stringParser)
   $ many1 (parsecMap Right digit <|> parsecMap Left (noneOf ";)"))

-- Parse between parentheses
intOrStringListParser = between (char '(')
                                (char ')')
                                (sepBy1 intOrStringParser (char ';'))
---

Then you get a tagged version of each string, along with the string itself:

*P> parseTest intOrStringListParser $ "(1;2w4;8;85)"
[(Just 1,"1"),(Nothing,"2w4"),(Just 8,"8"),(Just 85,"85")]

There may be some parsecMap-fold fusion optimization possible, though I 
haven't looked into that.


Dan

Paul Sujkov wrote:

Hi everybody,

suppose I have two different parsers: one just reads the string, and 
another one parses some values from it. E.g.:


parseIntList :: Parser [Integer]
parseIntList = do
  char '('
  res <- liftM (map read) (sepBy1 (many1 digit) (char ';'))
  char ')'
  return res

parseIntString :: Parser String
parseIntString = manyTill anyChar eof

so for some input like this - "(1;2;3;4)" - I will have two different 
results:


*Parlog> parseTest parseIntList "(1;2;3;4)"
[1,2,3,4]
*Parlog> parseTest parseIntString "(1;2;3;4)"
"(1;2;3;4)"

but the thing that I actually want is something like Parser ([Integer], 
String) - results from both parsers at a time, no matter whether one of 
them fails or not:


*Parlog> parseTest parseIntListAndString "(1;2;3;4)"
([1,2,3,4], "(1;2;3;4)")

it is impossible at first sight, because first parser to use will 
consume all the input, and there will be nothing to parse for the second one


Parsec contains the choice function, but it is implemented via <|> and 
that is mplus - so it tries the second alternative only if the first one 
fails. Is it possible to use two parsers for the same string (with 
try-like backtracking, no input actually consumed till the second parser 
finishes)? I can assume only dirty hacks with the GenParser internals - 
manual position storing and backtracking - but that is obviously not good


however, my first attempt to solve the problem was kind a like that: to 
parse string to String, and then to use it as an input for the next 
level parse call:


parseIntListAndString :: Parser ([Integer], String)
parseIntListAndString = do
  str <- parseIntString
  return (res str, str)
  where res str = case (parse parseIntList "" str) of
Left  err -> []
Right val -> val

but the problems with such a method began when I switched from Parser to 
GenParser with user state: function parseIntList have to update the 
state, but it can't have the same state as the parseIntListAndString any 
more: it has it's own. I can explicitly pass the state from 
parseIntListAndString to parseIntList, but I see no suitable way for the 
parseIntList to update it. I can return the updated state value from the 
parseIntList function, and call setState on a result - but it seems 
rather ugly to mee. However, if nothing else will do, that is an alternative


it is of course possible to use two different parsers sequentially, but 
it is also very ineffective: I need to use such multiple parsing on a 
relatively small substring of the actual input, so little backtracking 
would be a much nicier approach. Any suggestions?


--
Regards, Paul Sujkov





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Importing Control.Arrow changes inferred type of (m = f) x in ghci

2009-07-27 Thread Dan Weston
The following inferred type has a constraint that can be trivially 
satisfied, but isn't:


Control.Monad> :t \ (m,f,x) -> (m >>= f) x
\ (m,f,x) -> (m >>= f) x
  :: forall t a b. (Monad ((->) t)) => (t -> a, a -> t -> b, t) -> b

-- In Control.Monad there is forall t. instance Monad ((->) t),
-- so why is the vacuous Monad constraint still there?
-- Nor can I remove it with a type annotation:

Control.Monad> :t \ (m,f,x) -> (m >>= f) x :: forall t a b. (t -> a, a 
-> t -> b, t) -> b


<interactive>:1:13:
Inferred type is less polymorphic than expected
[snip]

-- Then if I just import a module:
Control.Monad :m + Control.Arrow

-- Now, the Monad ((->) t) constraint disappears:

Control.Monad Control.Arrow> :t \ (m,f,x) -> (m >>= f) x
\ (m,f,x) -> (m >>= f) x
  :: forall t a b. (t -> a, a -> t -> b, t) -> b

-- Although it still will not accept an explicit type annotation:
Control.Monad Control.Arrow> :t \ (m,f,x) -> (m >>= f) x :: forall t a 
b. (t -> a, a -> t -> b, t) -> b


<interactive>:1:13:
Inferred type is less polymorphic than expected
   [snip]


Is there some explicit kind annotation I can/should give to get the most 
general type?


Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Implicit concatenation in list comprehensions

2009-07-21 Thread Dan Weston

Bulat Ziganshin wrote:

Hello Neil,

Tuesday, July 21, 2009, 1:26:55 PM, you wrote:


 ++ [ "-i"  | not (null (ghcOptSearchPath opts)) ]
 ++ [ "-i", dir | dir <- ghcOptSearchPath opts ]



Following the discussions, I now support this extension too - I keep
seeing more and more places in my code where it would be very useful.



 ++[ "-i"| not (null (ghcOptSearchPath opts)) ]
 ++ concat [ ["-i", dir] | dir <- ghcOptSearchPath opts ]
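
For reference, the concat form just quoted behaves like this in GHCi (opts
here is just a made-up search path):

Prelude> let opts = ["dir1","dir2"]
Prelude> concat [ ["-i", dir] | dir <- opts ]
["-i","dir1","-i","dir2"]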


 [a   | c ] = concat $ do { c; return [a] }
 [a,b | c ]  = concat $ do { c; return [a,b] }

This would mean that

 [   | c ] = concat $ do { c; return [] }

The right is legal Haskell and gives []. The left is (not yet) legal. 
Should it be?


Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is there no Zippable class? Would this work?

2009-07-16 Thread Dan Weston
After rereading page 2 of McBride and Paterson's Functional Pearl, 
Applicative programming with effects, I think you are just reinventing 
Control.Applicative. The problem is that the default Applicative 
instance for [] is wrong, being a direct product rather than a direct sum.


If [] were not already an instance of Applicative, you could easily 
define it as:


import Control.Applicative

data MyList a = Nil | (:::) a (MyList a) deriving (Read,Show,Eq,Ord)
infixr 5 :::

-- same as []
instance Functor MyList where
  fmap f Nil = Nil
  fmap f (x ::: xs) = f x ::: fmap f xs

-- different from [], sum rather than product
instance Applicative MyList where
  pure x = x ::: Nil
  (<*>) (f ::: fs) (x ::: xs) = f x ::: (fs <*> xs)
  (<*>) _ _ = Nil

x = (1::Int) ::: 3 ::: 5 ::: 7 ::: 3 ::: Nil
y = (6::Int) ::: 9 ::: 3 ::: 1 ::: 4 ::: Nil
z = (2::Int) ::: 4 ::: 0 ::: 8 ::: 2 ::: Nil

test = (,,) <$> x <*> y <*> z

> test
(:::) (1,6,2) ((:::) (3,9,4) ((:::) (5,3,0) ((:::) (7,1,8) ((:::) 
(3,4,2) Nil


Alternately, you could write a newtype for [] and give it the zippy 
instance for Applicative.


Job Vranish wrote:
I was needing a way to zip generic data structures together today and 
was very annoyed to find that there is no Zippable class, or variant 
there of.


So I made my own:

class (Foldable f, Functor f) => Zippable f where
  fmaps :: (Foldable g) => g (a -> b) -> f a -> f b
  fmaps' :: [a -> b] -> f a -> f b -- to save a step on instance 
implementation

  zipWith :: (a -> b -> c) -> f a -> f b -> f c
  zip ::  f a -> f b -> f (a, b)
  unzip :: f (a, b) -> (f a, f b)
 
  fmaps fs a = fmaps' (toList fs) a

  fmaps' fs a = fmaps fs a
  zipWith f a b = fmaps (fmap f a) b
  zip = zipWith (,)
  unzip a = (fmap fst a, fmap snd a)
 
instance Zippable [] where

  fmaps' (fx:fs) (x:xs) = fx x : fmaps' fs xs
  fmaps' _   _  = []
 
--The fmaps function is also quite handy as a replacment for zipWith3, 
zipWith4, etc...

--For example:
 
x = [1, 3, 5, 7, 3]

y = [6, 9, 3, 1, 4]
z = [2, 4, 0, 8, 2]
test = fmap (,,) x `fmaps` y `fmaps` z
--  [(1,6,2),(3,9,4),(5,3,0),(7,1,8),(3,4,2)]

--you can also throw in a functor instance to remove the dependency on 
the Functor class, but it

--  might not be worth it:
instance (Zippable f) => Functor f where
  fmap f a = fmaps (repeat f) a


Is there any good reason that there isn't something like this in the 
standard libraries? Or, as far as I can tell, on hackage?

If not, then maybe I'll stick it on hackage.

- Job Vranish




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is there no Zippable class? Would this work?

2009-07-16 Thread Dan Weston

Way cool. I have gained newfound respect for what I don't know. :)

Can there ever be more than one (observably different) valid definition 
of pure for a given <*> that obeys all the laws? I would imagine that 
there could be at most one.


Dan

Ryan Ingram wrote:

(I'm going to play fast and loose with constructors for this post,
treating MyList and ZipList as if they were [])

On Thu, Jul 16, 2009 at 4:10 PM, Dan Westonweston...@imageworks.com wrote:

-- different from [], sum rather than product
instance Applicative MyList where
 pure x = x ::: Nil
 (<*>) (f ::: fs) (x ::: xs) = f x ::: (fs <*> xs)
 (<*>) _ _ = Nil


Unfortunately, this instance doesn't fulfill this Applicative law:
 pure id <*> f = f

pure id <*> [1,2,3]
= [id] <*> [1,2,3]
= [id 1]
= [1]

Fortunately, the solution already exists in Control.Applicative:


-- | Lists, but with an 'Applicative' functor based on zipping, so that
--
-- @f '<$>' 'ZipList' xs1 '<*>' ... '<*>' 'ZipList' xsn = 'ZipList' (zipWithn f 
xs1 ... xsn)@
--
newtype ZipList a = ZipList { getZipList :: [a] }

instance Functor ZipList where
fmap f (ZipList xs) = ZipList (map f xs)

instance Applicative ZipList where
pure x = ZipList (repeat x)
ZipList fs <*> ZipList xs = ZipList (zipWith id fs xs)


In this case:

pure id <*> [1,2,3]
= [id, id, ...] <*> [1,2,3]
= [id 1, id 2, id 3]
= [1,2,3]
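
Concretely, applying that to the same data as the earlier MyList example
(this GHCi session is mine, not Ryan's):

Prelude Control.Applicative> getZipList ((,,) <$> ZipList [1,3,5,7,3] <*> ZipList [6,9,3,1,4] <*> ZipList [2,4,0,8,2])
[(1,6,2),(3,9,4),(5,3,0),(7,1,8),(3,4,2)]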

  -- ryan




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] homomorphic encryption and monads?

2009-07-15 Thread Dan Weston

I think there may be a problem here.

Homomorphic encryption is a form of encryption where one can perform a 
specific algebraic operation on the plaintext by performing a (possibly 
different) algebraic operation on the ciphertext. 


The word specific means that the functor is discrete, not continuous. 
Only Integer can be encoded. Also, the arrow mapping is partial: fmap 
does not accept arbitrary any (a - b) but only a natural transformation 
pair (in,out).


That would make Encryption an indexed arrow, something like

class Arrow a => In a b where
  in :: a b Integer

class Arrow a => Out a c where
  out :: a Integer c

instance (In a b, Out a c) => Arrow a b c where
 arr f = in >>> f >>> out

Dan

Max Rabkin wrote:

On Wed, Jul 15, 2009 at 11:54 PM, Jason Dagitda...@codersbase.com wrote:

Hello,

I have just a very vague understanding of what homomorphic encryption is,
but I was wondering if homomorphic encryption behaves as a one-way monad
(like IO).


An interesting idea. Let's see where this leads.


I could be mistaken about the way this works, but it seems that isSpam can't
return a plain 'Bool' because then you could know things about the encrypted
data without having to decrypt it first.  So that is why I think it has to
return Cypher Bool.


Yes, this is the idea behind homomorphic encryption: you can do some
work on an encrypted input, and get an encrypted output.

Your categorical spidey-sense should be tingling right now, and indeed
it did, but you interpreted it wrong (but it's a common trap)


And because it's a homomorphic encryption, you could have something like
doWork:
doWork :: Cypher a -> (a -> b) -> Cypher b


Looks good. This type should send your spidey-sense into the red.


Which we could use to implement isSpam:

isSpam s = doWork s spamTest
  where spamTest :: String -> Bool

Thinking about it a bit more, since we never have to decrypt the
data to work with it, it seems that (a -> b) is wrong as well, because we
don't have the key or whatever to do the decrypting.


(a -> b) is not wrong. Homomorphic encryption gives you exactly this
*magic* function that takes an ordinary f :: a -> b, and applies it to
a Cypher a, giving you a Cypher b. No deciphering happens. The
function gets lifted/mapped into Cypher.


So, then it seems reasonable that the only type for doWork is this:
doWork :: Cypher a -> (Cypher a -> Cypher b) -> Cypher b

Which doesn't really seem all that useful now.


Indeed. That is just (a restricted version of) the type of 'flip ($)',
a rather uninteresting (though not useless) function.


On the other hand, maybe we have an algorithm which can take a function on
normal values and gives us a function on Cypher values:

enCypher :: (a -> b) -> Cypher a -> Cypher b


This is exactly what you have. This is just the flipped version of
your first doWork.

And this function is better known as fmap. Cypher is a Functor.
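
As a toy illustration only, here is a stand-in Cypher that performs no real
encryption, just to show where fmap sits (an actual homomorphic scheme would
of course not expose the plaintext like this):

newtype Cypher a = Cypher a

instance Functor Cypher where
  fmap f (Cypher x) = Cypher (f x)   -- this fmap plays the role of enCypher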

Because they have special syntax, are widely used in IO, and have a
scary name (and perhaps for other, better reasons too), Monads seem to
attract special attention.

Functor and Applicative get much less love, but both are valuable and
interesting typeclasses (they form a hierarchy: every monad is an
applicative functor, and every applicative functor is a functor). And
hopefully your spidey-sense now has a Functor-detector :)

I was planning to show that Cypher can't be a monad or an applicative
functor, but their seems to be a hole in my thinking. Hopefully I can
fix it, and hopefully everything I've said up to now has been correct.

--Max
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comonadic composition and the game Doublets

2009-06-08 Thread Dan Weston
I think the structure you are looking for is called a wedge sum [1], 
which is the coproduct in the category of the pointed spaces, each of 
which is (in this case) the group action of changing one letter to 
another in the ith position of a word of fixed length.


One small tricky part is that, in contrast to the direct product of n 
1-D spaces with a list comprehension, which enumerates the product space 
with duplicates:


[(x,y,z,...) | x <- xs, y <- ys, z <- zs, ... ]

with a wedge sum a naive algorithm overcounts the point (or origin, in 
this case the identity function). This can be seen in your transition 
function, where non-identity transformations are counted only once, but 
the identity transformation is counted n times:


*Main> length . filter (== "abd") . transition $ "abc"
1
*Main> length . filter (== "abc") . transition $ "abc"
3

If you want your result to be a set, you may want to treat the identity 
transformation separately.


[1] http://en.wikipedia.org/wiki/Wedge_sum
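
As an aside, for plain lists the one-position-at-a-time modification being
asked about can be written directly; a sketch (mine, not from the thread):

each :: (a -> a) -> [a] -> [[a]]
each _ []     = []
each f (x:xs) = (f x : xs) : map (x:) (each f xs)

-- each (const 'z') "abc" == ["zbc","azc","abz"]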

Dan

Matthew wrote:

Today, I was working on coding a solver for the game doublets.
It's a word game where you transform the start word into the end
word one letter at a time (and the intermediate states must also
be valid words).  For example, one solution to (warm, cold) is

[warm, ward, card, cord, cold]

So, one of my first goals was to write a function that would generate
the possible words a given word could transition to.  Here's a simple
recursive implementation:

transition :: [Char] -> [[Char]]
transition [] = []
transition (c:cs) = map (c:) (transition cs) ++ map (:cs) ['a'..'z']

For some reason, I find this formulation to be strangely unsatisfying.
It seems to me that this sort of computation -- i.e. modifying each
element of a container (but only one at a time) -- should be
expressible with some higher order function.  In a nutshell, what I'm
looking for would be a function, say each with this type.

each :: (Container c) => (a -> a) -> c a -> c (c a)

I'm not particularly sure what Class to substitute for Container.
Comonad seemed like a promising solution initially, and provides
the basis for my current implementation of the doublets solver.

The problem being that cobind (extend) takes a function of type (w a -> 
  a)

instead of (a -> a).  I suspect that there may be a clever way to write
each by using liftW in combination with .  or something like that,
but presently, I'm at a loss.

Has anyone else encountered this sort of abstraction before.  I'd love
to hear your ideas about what Classes each should support, and
how to implement it.


Matt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comonadic composition and the game Doublets

2009-06-08 Thread Dan Weston
Oops. Make that: a list comprehension, which enumerates the product 
space *without* duplicates!


Dan Weston wrote:
I think the structure you are looking for is called a wedge sum [1], 
which is the coproduct in the category of the pointed spaces, each of 
which is (in this case) the group action of changing one letter to 
another in the ith position of a word of fixed length.


One small tricky part is that, in contrast to the direct product of n 
1-D spaces with a list comprehension, which enumerates the product space 
with duplicates:


[(x,y,z,...) | x <- xs, y <- ys, z <- zs, ... ]

with a wedge sum a naive algorithm overcounts the point (or origin, in 
this case the identity function). This can be seen in your transition 
function, where non-identity transformations are counted only once, but 
the identity transformation is counted n times:


*Main> length . filter (== "abd") . transition $ "abc"
1
*Main> length . filter (== "abc") . transition $ "abc"
3

If you want your result to be a set, you may want to treat the identity 
transformation separately.


[1] http://en.wikipedia.org/wiki/Wedge_sum

Dan

Matthew wrote:

Today, I was working on coding a solver for the game doublets.
It's a word game where you transform the start word into the end
word one letter at a time (and the intermediate states must also
be valid words).  For example, one solution to (warm, cold) is

[warm, ward, card, cord, cold]

So, one of my first goals was to write a function that would generate
the possible words a given word could transition to.  Here's a simple
recursive implementation:

transition :: [Char] -> [[Char]]
transition [] = []
transition (c:cs) = map (c:) (transition cs) ++ map (:cs) ['a'..'z']

For some reason, I find this formulation to be strangely unsatisfying.
It seems to me that this sort of computation -- i.e. modifying each
element of a container (but only one at a time) -- should be
expressible with some higher order function.  In a nutshell, what I'm
looking for would be a function, say each with this type.

each :: (Container c) => (a -> a) -> c a -> c (c a)

I'm not particularly sure what Class to substitute for Container.
Comonad seemed like a promising solution initially, and provides
the basis for my current implementation of the doublets solver.

The problem being that cobind (extend) takes a function of type (w a -> 
  a)

instead of (a -> a).  I suspect that there may be a clever way to write
each by using liftW in combination with .  or something like that,
but presently, I'm at a loss.

Has anyone else encountered this sort of abstraction before.  I'd love
to hear your ideas about what Classes each should support, and
how to implement it.


Matt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Non Empty List?

2009-06-04 Thread Dan Weston

Unless I'm missing something in your description, why not

  data Container a = Single a | Many a a [a]
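
For instance (these helper names are my own, not from the thread), the two
explicit fields in Many are what make the "at least one sibling" invariant
hold by construction:

fromList :: [a] -> Maybe (Container a)
fromList []       = Nothing
fromList [x]      = Just (Single x)
fromList (x:y:zs) = Just (Many x y zs)

toList :: Container a -> [a]
toList (Single x)    = [x]
toList (Many x y zs) = x : y : zs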

Dan

Günther Schmidt wrote:

Hi Jake,


Jake McArthur schrieb:

Günther Schmidt wrote:
data Container a = Single a | Many a [a]  
but the problem above is that the data structure would allow to 
construct a Many 5 [] :: Container Int.

I think you meant to do either

data Container a = Single a | Many a (Container a)




nope, I pretty much meant what I wrote above, the solution here would 
mean deeply nested containers, since it's a recursive data structure.


I need a data structure as in my example without the [] being possible 
to be empty.


It's quite possible that in order to achieve this I would need to split 
this in 2 separate data declarations.


The idea behind this is that an a can pocket siblings, but only one 
level deep and that an a's list of pocketed/swallowed siblings must 
not be empty, because otherwise it would automatically be an Single a.


Sorry, I really don't know how to put this better.

Günther


or

data Container a = Container a [a]

- Jake



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hugs vs. GHCi

2009-05-29 Thread Dan Weston

http://haskell.org/onlinereport/exps.html#sect3.12

Pattern bindings are matched lazily; an implicit ~ makes these patterns 
irrefutable. For example,


let (x,y) = undefined in e

does not cause an execution-time error until x or y is evaluated.

So GHCi is correct.

Dan

Vladimir Reshetnikov wrote:

Hi,

The following expression evaluates to 1 in GHCi, but results in an
error in Hugs:

let f x = let g y = [x,y] in (g 1, g []) in 1

What is the correct behavior?

Thanks
Vladimir
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] name for monad-like structure?

2009-04-28 Thread Dan Weston

I suspect your structure doesn't exist.

A Kleisli algebra (a -> m b) has a full subalgebra (() -> m ()), but (() 
-> m b) is not an algebra (it is not closed).


I'm guessing that the largest proper subset of (a -> m b) is just
(() -> m ()).

Dan

Tony Morris wrote:

Michael Vanier wrote:

I've stumbled upon a structure that is like a weaker version of a
monad, one that supports return and >> but not >>=.  Has anyone seen
this before, and if so, does it have a standard name?

Mike


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Are you sure it supports
(>>) :: m a -> m b -> m b

and not
mplus :: m a -> m a -> m a ?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Overriding a Prelude function?

2009-04-23 Thread Dan Weston

*Main> :t rollDie ~ (rollDie ~ rollDie)
rollDie ~ (rollDie ~ rollDie) :: Seed -> (Int, Seed)

This is a function. How exactly do you want ghci to show it? When you 
figure that out, feel free to make an instance of Show for it.


Meanwhile, you can just apply the function to a Seed value and see what 
happens:


*Main> rollDie ~ rollDie ~ rollDie $ 6
(1,1145965850)

michael rice wrote:
OK, I changed the operator from (>>) to (~). When I try to use it I 
get this:


[mich...@localhost ~]$ ghci rand
GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
[1 of 1] Compiling Main ( rand.hs, interpreted )
Ok, modules loaded: Main.
*Main> rollDie ~ (rollDie ~ rollDie)

<interactive>:1:0:
No instance for (Show (Seed -> (Int, Seed)))
  arising from a use of `print' at <interactive>:1:0-32
Possible fix:
  add an instance declaration for (Show (Seed -> (Int, Seed)))
In a stmt of a 'do' expression: print it
*Main

Michael


--- On *Wed, 4/22/09, Luke Palmer /lrpal...@gmail.com/* wrote:


From: Luke Palmer lrpal...@gmail.com
Subject: Re: [Haskell-cafe] Overriding a Prelude function?
To: michael rice nowg...@yahoo.com
Cc: Ross Mellgren rmm-hask...@z.odi.ac, Dan Weston
weston...@imageworks.com, haskell-cafe@haskell.org
haskell-cafe@haskell.org
Date: Wednesday, April 22, 2009, 5:02 PM

On Wed, Apr 22, 2009 at 1:47 PM, michael rice nowg...@yahoo.com wrote:

Here's what I get:

[mich...@localhost ~]$ ghci
GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Prelude> import Prelude hiding ((>>))


You know, to avoid this nonsense you could just name the operator
something else, like ~, or ~, or $...@**!.  Operators are just names.

Luke
 




<interactive>:1:0: parse error on input `import'
Prelude

=

I was passing seed0 to rollDie and getting back (r1,seed1)
 passing seed1 to rollDie and getting back (r2,seed2)
 passing seed2 to rollDie and getting back (r3,seed3)

Just based on the problem text, I would guess that
passing rollDie and seed0 to (>>) I would get back
(r3,seed3),
losing the intermediate random numbers r1 and r2 along the way, at
least that's what I understood it to say.

So, I know that next I'm probably going to have to do something to
remedy that, but I haven't gotten to that next step yet. What is
unsugar?

Thanks in advance for your patience.

Michael


--- On *Wed, 4/22/09, Dan Weston /weston...@imageworks.com/* wrote:


From: Dan Weston weston...@imageworks.com
Subject: Re: [Haskell-cafe] Overriding a Prelude function?
To: Ross Mellgren rmm-hask...@z.odi.ac
Cc: michael rice nowg...@yahoo.com, haskell-cafe@haskell.org
Date: Wednesday, April 22, 2009, 12:37 PM

Be aware that the do unsugars to (Prelude.>>), not your
(>>), even if you hide (Prelude.>>):

import Prelude hiding ((>>))
m >> f = error "Call me!"
main = putStrLn . show $ do [3,4]
                            [5]

The desugaring of the do { [3,4]; [5] } is (Prelude.>>)
[3,4] [5] = [5,5], whereas you might have hoped for [3,4] >>
[5] = error "Call me!"

Dan

Ross Mellgren wrote:
  I think
 
import Prelude hiding ((>>))
 
  does that.
 
  -Ross
 
  On Apr 22, 2009, at 11:44 AM, michael rice wrote:
 
  I've been working through this example from:
http://en.wikibooks.org/wiki/Haskell/Understanding_monads
 
  I understand what they're doing all the way up to the
definition of (), which duplicates Prelude function ().
To continue following the example, I need to know how to
override the Prelude () with the () definition in my
file rand.hs.
 
  Michael
 
  ==
 
  [mich...@localhost ~]$ cat rand.hs
  import System.Random

Re: [Haskell-cafe] Overriding a Prelude function?

2009-04-22 Thread Dan Weston
Be aware that the do unsugars to (Prelude.>>), not your (>>), even if 
you hide (Prelude.>>):


import Prelude hiding ((>>))
m >> f = error "Call me!"
main = putStrLn . show $ do [3,4]
                            [5]

The desugaring of the do { [3,4]; [5] } is (Prelude.>>) [3,4] [5] = 
[5,5], whereas you might have hoped for [3,4] >> [5] = error "Call me!"


Dan

Ross Mellgren wrote:

I think

import Prelude hiding ((>>))

does that.

-Ross

On Apr 22, 2009, at 11:44 AM, michael rice wrote:

I've been working through this example from: 
http://en.wikibooks.org/wiki/Haskell/Understanding_monads


I understand what they're doing all the way up to the definition of 
(>>), which duplicates Prelude function (>>). To continue following 
the example, I need to know how to override the Prelude (>>) with the 
(>>) definition in my file rand.hs.


Michael

==

[mich...@localhost ~]$ cat rand.hs
import System.Random

type Seed = Int

randomNext :: Seed -> Seed
randomNext rand = if newRand > 0 then newRand else newRand + 2147483647
    where newRand = 16807 * lo - 2836 * hi
          (hi,lo) = rand `divMod` 127773

toDieRoll :: Seed -> Int
toDieRoll seed = (seed `mod` 6) + 1

rollDie :: Seed -> (Int, Seed)
rollDie seed = ((seed `mod` 6) + 1, randomNext seed)

sumTwoDice :: Seed -> (Int, Seed)
sumTwoDice seed0 =
  let (die1, seed1) = rollDie seed0
      (die2, seed2) = rollDie seed1
  in (die1 + die2, seed2)

(>>) m n = \seed0 ->
  let (result1, seed1) = m seed0
      (result2, seed2) = n seed1
  in (result2, seed2)

[mich...@localhost ~]$


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Functor and Haskell

2009-04-21 Thread Dan Weston
You are on the right track. The usual construction is that Hask is the 
category (with types as objects and functions as morphisms).


Functor F is then an endofunctor taking Hask to itself:

a -> F a
f -> fmap f

So, for F = []:

a -> [a]
f -> map f

Natural transformations are then any fully polymorphic (no context) 
unary function. The polymorphism is what makes them natural, since there 
is no method to treat one object (type) different from another.
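
A standard example (mine, not from the original message): safeHead is a
natural transformation from [] to Maybe, and its polymorphism is exactly
what forces the naturality square to commute:

safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- Naturality: fmap f . safeHead == safeHead . fmap f   (fmap on [] is map)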


Daryoush Mehrtash wrote:
In category theory functors are defined between two category of C and D 
where every object and morphism from C is mapped to D. 



I am trying to make sense of the above definition with functor class in 
Haskell.   Let say I am dealing with List type.  When I define List to 
be a instance of a functor I am saying the source category (C) is 
Haskell types and the destination category is List (D).In this the 
fmap is implementation of the mapping between every morphism in my 
Haskell Categroy (C) to morphism in  List cataegory (D).   With type 
constructor I also have the mapping of types (objects in Haskell 
Category, or my source cataegroy C) to List category (D).  So my functor 
in the catarogy sense is actually the fmap and type constructor.  Am I 
remotely correct?


If this is correct With this example, then can you then help me 
understand the transformation between functors and natural 
transformations? Specifically, what does it means to have two 
different functors between Haskell cateogry and List category? 


Thanks,

Daryoush





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Functor and Haskell

2009-04-21 Thread Dan Weston
List is not a full subcategory of Hask, so it's a bad choice. Namely, 
types of functions on a list (e.g. [a] -> [a]) are not themselves lists 
of type [b] for some b, and so are not objects of List (though they are 
morphisms in it). In Hask, ([a] -> [a]) is both an object and a morphism 
(in fact, there is a unique object in Hask for every morphism).


It turns out (I believe) that the smallest subcategory that fits the 
Haskell Functor type class is Hask itself. That doesn't mean that the 
functor is epic: (Int -> Int) is not the type of (fmap f) for any f.


So C and D are the same category: Hask.

Take a look at:
http://en.wikibooks.org/wiki/Haskell/Category_theory#Functors

Daryoush Mehrtash wrote:

I am not sure I follow how the endofunctor gave me the 2nd functor.

As I read the transformation there are two catagories C and D and two 
functors F and G between the same two catagories.  My problem is that I 
only have one functor between the Hask and List catagories.  So where 
does the 2nd functor come into picture that also maps between the same C 
and D catagories?


Thanks

Daryoush



On Tue, Apr 21, 2009 at 4:01 PM, Dan Weston weston...@imageworks.com 
mailto:weston...@imageworks.com wrote:


You are on the right track. The usual construction is that Hask is
the category (with types as objects and functions as morphisms).

Functor F is then an endofunctor taking Hask to itself:

a -> F a
f -> fmap f

So, for F = []:

a -> [a]
f -> map f

Natural transformations are then any fully polymorphic (no context)
unary function. The polymorphism is what makes them natural, since
there is no method to treat one object (type) different from another.


Daryoush Mehrtash wrote:

In category theory functors are defined between two category of
C and D where every object and morphism from C is mapped to D.

I am trying to make sense of the above definition with functor
class in Haskell.   Let's say I am dealing with the List type.  When I
define List to be an instance of a functor, I am saying the source
category (C) is Haskell types and the destination category is
List (D).In this the fmap is implementation of the mapping
between every morphism in my Haskell category (C) to a morphism in
the List category (D).   With the type constructor I also have the
mapping of types (objects in the Haskell category, or my source
category C) to the List category (D).  So my functor in the
category sense is actually the fmap and type constructor.  Am I
remotely correct?

If this is correct with this example, then can you help
me understand the transformation between functors and natural
transformations? Specifically, what does it mean to have
two different functors between the Haskell category and the List category?
Thanks,

Daryoush









Re: [Haskell-cafe] Looking for the fastest Haskell primes algorithm

2009-04-16 Thread Dan Weston
Unless primesUpTo n goes from highest to lowest prime (ending in 2), I 
don't see how sharing is possible (in either space or time) between 
primesUpTo for different n.


Is it intended that the primes should therefore be listed in descending 
order?


a...@spamcop.net wrote:

primes :: [Integer]

isn't as useful as you might think for a library, because it must, by
definition, leak an uncontrolled amount of memory.  This:

primesUpTo :: Integer -> [Integer]

is a better interface in that respect.
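For concreteness, a minimal primesUpTo in that style might look like this (a sketch only: plain trial-division sieve, primes in ascending order; nothing fast is implied):

primesUpTo :: Integer -> [Integer]
primesUpTo n = sieve [2..n]
  where
    -- keep the head, drop its multiples, recurse on the rest
    sieve []     = []
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]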





Re: [Haskell-cafe] about the Godel Numbering for untyped lambda calculus

2009-03-31 Thread Dan Weston
DISCLAIMER: The last I studied this was over 20 years ago, so my brain 
is seriously rusty. You are warned...


 you mean the godel function (denoted by # and generally will be of type
 # : term -> number?) is a kind of function instead of a particular
 function?

No. The Godel mirror function (as defined by Godel) is a single 
function, the product of primes (whose ordering gives the position in 
the string) with exponents (giving the symbol at that position). 
However, I think any total injective function with partial injective 
inverse will do. Don't get hung up on the details of the mirroring 
(Godel doesn't). Once you accept that it can be done, there is no need 
to actually do it.


There is nothing magic about integers, beyond the fact that elementary 
arithmetic is well understood by humans and should not have 
inconsistencies (it probably doesn't, but we can't prove it within any 
language more intuitive than arithmetic, so why should be believe the 
proof in that language?). Importantly, all but the last step of the 
proof can be encoded as primitive recursive functions, meaning that they 
always halt within a predetermined number of steps (satisfying the 
Finitists). Only the last step cannot be so limited. This is a clear 
example where non-finite mathematics has demonstrably more power in a 
fundamentally important way.


Why bother going through this encoding? The main reason for the encoding 
is to overcome the fundamental limitation of structural induction 
whereby there is no "this" smaller than "this". I.e. quoting a sentence 
only makes it longer. With functions on integers, this limitation is 
removed, so that although there is no string saying "This statement is 
not true" (since "This statement" != "\"This statement is not true\""), 
there *is* some function taking the Godel number of the unquoted "This" to 
the Godel number of the string to which it refers. Doing so takes some 
bit of cleverness, but the original proof in German is only about 8 
pages long, as I recall.


If we don't go with mirroring and just use strings, we need to introduce 
some equivalent trick, namely named functions. In a language without 
pointers and only lambda expressions, there is no way to refer to 
oneself without repeating the quote in infinite regression. For instance 
\x -> "333" is longer than "33". You cannot talk about a string 
without embedding it as a substring. However, once you add named 
functions to your metalanguage (Haskell):


f 3 = 

Then the string "f 3" is shorter than the string "xxx". Suppose f 4 
= "~xxx". Now one can talk about the other with the shorter string 
on the left:   length "f 3" < length "~xxx", but also length "f 4" < 
length "xxx". A longer string can talk about a shorter string (but 
not vice versa), so this mapping is critical to overcoming the 
limitation about talking about oneself.


After that, the rest is easy: encode the Liar's Paradox and voila. Once 
we have the encoding of a language, and construct functions to encode 
proof transformations, we can reduce derivability to a simple predicate 
derivable :: String -> Bool and look for some formula f such that 
derivable f == False and derivable ("~" ++ f) == False.


Godel's First Incompleteness Theorem merely says that there is some 
integer (which mirrors some well-formed formula) not in the set of 
integers that mirror derivable theorems, making your system 
incomplete. Moreover, if you remedy this situation by adding this 
integer to the set as an axiom, you immediately can derive another 
integer which mirrors its negated formula, making your system inconsistent.


The metasystem (with symbols for derivable, transformation rules like 
modus ponens, and the like), are not part of the formal system, but in 
the metalanguage. By encoding this metalanguage as well and redoing the 
above, you get Godel's Second Incompleteness Theorem, which says that no 
system can prove its own consistency, and if an axiom is added to assert 
 consistency, you can immediately also prove its negation, making it 
unsound.


Don't look for a hat trick. Encoding the meta-meta-system buys you 
nothing extra.


That's about all I can remember without hypnosis, and I'm afraid of what 
I might say under hypnosis... :)


Dan

Algebras Math wrote:
you mean the godel function (denoted by # and generally will be of type 
# : term -> number?) is a kind of function instead of a particular 
function? If so, what kind of properties do these kinds of functions have? Any 
limitation? How capable will they be?


Thanks

alg

On Tue, Mar 31, 2009 at 4:01 AM, Dan Weston weston...@imageworks.com wrote:


I can't tell exactly what you're asking, but I'll give it a try! :)

primes :: [Integer]
primes = [2,3,5,7,11,13,17,19,23,29,31,undefined]

godel :: String -> Integer
godel = product . zipWith (^) primes . map (toInteger . ord)

-- Here is the identity function (note that double backslash

Re: [Haskell-cafe] Re: Looking for practical examples of Zippers

2009-03-31 Thread Dan Weston


 What I've learned: Zippers are structured collections[1] with a
 focus. Through a Zipper you can O(1) change the value of the focused
 element: that's the fundamental property. In addition, you can change
 the focus through a series of moving functions.

To clarify: there is no magic that turns a data structure into O(1) 
access, just as a CPS transformation is not a magic bullet for speeding 
up programs. Only the move functions (changing focus to some subset of 
adjacent substructures) are O(1). Update functions need not be O(1). 
And amortized random access time via a zipper is never less than 
amortized random access of the optimal equivalent un-zippered data 
structure.
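To make the O(1) claims concrete in one dimension (a minimal list-zipper sketch, not the 2-D structure under discussion; all names here are mine):

data Zipper a = Zipper [a] a [a]   -- reversed left context, focus, right context

left, right :: Zipper a -> Zipper a
left  (Zipper (l:ls) x rs) = Zipper ls l (x:rs)   -- O(1) move
left  z                    = z                    -- already at the left end
right (Zipper ls x (r:rs)) = Zipper (x:ls) r rs   -- O(1) move
right z                    = z                    -- already at the right end

modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls x rs) = Zipper ls (f x) rs    -- O(1), but only at the focus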


Zippers are most effective when a structure is accessed by some 
quasicontinuous path through it. Fortunately, this happens quite a lot, 
although laziness does obviate the need for a zipper in many of these cases.


Dan

Cristiano Paris wrote:

On Mon, Mar 30, 2009 at 9:46 PM, Günther Schmidt gue.schm...@web.de wrote:

Thanks Don,

I followed some examples but have not yet seen anything that would show me
how, for instance, turn a nested Map like

Map Int (Map Int (Map String Double)

into a zipped version.

That is presuming of course that this use is feasible at all.


Hi Günther,

a couple of weeks ago I was looking into Zippers my self as well.
After reading all the documents mentioned in the other messages, I
decided to go for my implementation as the proposed ones seemed to me
unnecessarily complicated. You can see the discussion here:

http://www.haskell.org/pipermail/haskell-cafe/2009-March/056942.html

I have to thank Heinrich Apfelmus and Ryan Ingram because they pointed
out a major flaw in my implementation and so I got Zippers and why
they are implemented as such.

What I've learned: Zippers are structured collections[1] with a
focus. Through a Zipper you can O(1) change the value of the focused
element: that's the fundamental property. In addition, you can change
the focus through a series of moving functions. Regarding their
implementation, it's important to understand that the moving functions
must be aware of the changes you made to the focused element. This
is carried out by having the moving functions rebuild the context of
the new focused element starting from the current focus' context.

On the contrary, my implementation relied on laziness and partial
application but lacked the awareness of the changes. If you can
catch this difference, it's easy to grasp the Zipper/Delimited
Continuation link and the statement a zipper is a delimited
continuation reified to data.

Sorry for my explanation using elementary terms: I'm no computer
science theorist ;)

Hope this helped.

Cristiano

[1] By structured collection I mean lists, trees, graphs and so on.
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] about the Godel Numbering for untyped lambda calculus

2009-03-30 Thread Dan Weston

I can't tell exactly what you're asking, but I'll give it a try! :)

primes :: [Integer]
primes = [2,3,5,7,11,13,17,19,23,29,31,undefined]

godel :: String -> Integer
godel = product . zipWith (^) primes . map (toInteger . ord)

-- Here is the identity function (note that double backslash is a single 
character)

godel "\\x -> x"
16299205143424414778516136340297046311354309801073354280170648361560910598064505161310025984520006513748905700221911285514217885596672516569597333802297968719766931760945587974894890573117639225077609114474930162613977545676952342196230560374474907135258153112532736917033464552751154867175807782380948086616267097097663289151592060026757162877597927072000376832

-- = 2^92 * 3^120 * 5^32 * 7^45 * 11^62 * 13^32 * 17^120

ungodel :: Integer -> String
  -- ungodel . godel = id

(See http://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic for 
why this works).


These numbers get large quickly, but are finite for any finite string of 
finite alphabet.


godel is a total function (every string has a unique integer), and its 
inverse (an exercise to the reader involving prime factor decomposition) 
would also be total if the alphabet were infinite (so that ord would be 
total).
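For completeness, that reader exercise might look like this (a sketch; it assumes the same primes list above and an input actually produced by godel, so every exponent is a valid character code and the string is short enough to stay within the defined part of primes):

import Data.Char (chr)

-- exponent of the prime p in the factorization of n
multiplicity :: Integer -> Integer -> Integer
multiplicity p n
  | n `mod` p == 0 = 1 + multiplicity p (n `div` p)
  | otherwise      = 0

ungodel :: Integer -> String
ungodel n = map (chr . fromInteger)
          . takeWhile (> 0)
          $ map (`multiplicity` n) primes

-- e.g. ungodel (godel "\\x -> x") == "\\x -> x"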


Frankly, I wouldn't know how to begin encoding the godel numbering in 
the type system (shudder!), but being that it is a total primitive 
recursive function, I suppose there is no theoretical impediment.


In contrast, Godel's Theorems themselves cannot be so encoded because 
the last step involves a general recursive function.


Dan

Algebras Math wrote:

Hi all,

I am reading the book The lambda calculus: Its syntax and Semantics in 
the chapter about Godel Numbering but I am confused in some points.


We know for Church Numerals, we have Cn = \fx.f^n(x) for some n >= 0, i.e. 
C0 = \fx.x and C1 = \fx.fx.


 From the above definition, I could guess the purpose of this kind of 
encoding is trying to encode numeral via terms.


How about the Godel Numbering? From definition we know people say #M is 
the godel number of M and we also have [M] = C#M to enjoy the second 
fixed point theorem : for all F there exists X s.t. F[X] = X.


What the mapping function # is standing for? How could I use it? What 
the #M will be? How to make use of the Godel Numbering?


Thank you very much!

alg





Re: [Haskell-cafe] about Haskell code written to be too smart

2009-03-25 Thread Dan Weston

So to be clear with the terminology:

inductive   = good consumer?
coinductive = good producer?

So fusion should be possible (automatically? or do I need a GHC rule?) with
  inductive . coinductive

Or have I bungled it?

Dan

wren ng thornton wrote:

Thomas Hartman wrote:

sorry, wrong function.

should be

partitions [] xs = []
partitions (n:parts) xs =
  let (beg,end) = splitAt n xs
  in beg : ( case end of
               [] -> []
               xs -> partitions parts xs)



It's not tail recursive, FWIW. The recursive expression has (:) as it's 
head before it hits `partitions`. It is however nicely coinductive, 
which has other good properties.


We could make it tail-recursive easily,

    partitions = go id
        where
        go k []     xs = k []
        go k (n:ns) xs =
            let (beg,end) = splitAt n xs
                k'        = k . (beg:)
            in  case end of
                    []  -> k' []
                    xs' -> go k' ns xs'

(Note how this version has `go` as the head of the recursive expression.)

...however this version has different strictness properties. In 
particular, let both input lists be infinite (and take a finite portion 
of the result). The original version works fine because it gives a 
little bit of output (beg:) at each step of the recursion ---which is 
all coinductive means. The tail-recursive version hits _|_ however, 
because we've delayed giving any input (k []) until one of the two lists 
hits [] ---we've tried doing induction on co-data and so we hit an 
infinite loop.


This dichotomy between coinduction and tail-recursion is quite common. 
It's another example of the recently discussed problem of defining foldr 
in terms of foldl. Whether the termination differences matter depends on 
how the function is to be used.



Another nice property of coinduction is that it means we can do 
build/fold fusion easily:


    partitions = \ns xs -> build (\cons nil -> go cons nil ns xs)
        where
        go cons nil = go'
            where
            go' []     xs = nil
            go' (n:ns) xs =
                let (beg,end) = splitAt n xs
                in  beg `cons` case end of
                        []  -> nil
                        xs' -> go' ns xs'

By using the GHC.Exts.build wrapper the fusion rules will automatically 
apply. The second wrapper, go, is just so that the worker, go', doesn't 
need to pass cons and nil down through the recursion.






Re: [Haskell-cafe] about Haskell code written to be too smart

2009-03-25 Thread Dan Weston


 However, there is something to be said for code that just looks like a
 duck and quacks like a duck. It's less likely to surprise you.

 So... I insist... Easy for a beginner to read == better!

All you have said is that one building a skyscraper will need 
scaffolding, blueprints, and a good building inspector. The intended 
inhabitants of that skyscraper will not want to stare out at scaffolding 
for the rest of their lives.


Put the simple version in a QuickCheck predicate. That is the ideal 
place for it.


All the better if through this process we all learn a lesson about 
strictness, laziness, and the State monad.


Dan

Thomas Hartman wrote:

Are you saying there's a problem with this implementation? It's the


Yes, there is actually a problem with this implementation.

import Data.List
import Control.Monad.State
import Debug.Trace.Helpers


partitions [] xs = []
partitions (n:parts) xs =
  let (beg,end) = splitAt n xs
  in beg : ( case end of
               [] -> []
               xs -> partitions parts xs)

partitionsSimpleStupidGood = partitions

partitionsTooFrickinClever = evalState . mapM (State . splitAt)

testP pf = mapM_ putStrLn  [
  show . pf [3,7..] $ [1..10]
  , show . pf [3,7,11,15] $ [1..]
  , show . head . last $ pf [3,3..] [1..10^6]
]

*Main> testP partitionsSimpleStupidGood
[[1,2,3],[4,5,6,7,8,9,10]]
[[1,2,3],[4,5,6,7,8,9,10],[11,12,13,14,15,16,17,18,19,20,21],[22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]]
100

Now try testP partitionsTooFrickinClever

Now, I am sure there is a fix for whatever is ailing the State monad
version, and we would all learn a lesson from it about strictness,
laziness, and the State monad.

However, there is something to be said for code that just looks like a
duck and quacks like a duck. It's less likely to surprise you.

So... I insist... Easy for a beginner to read == better!


2009/3/24 Dan Piponi dpip...@gmail.com:

Miguel Mitrofanov wrote:

takeList = evalState . mapM (State . splitAt)

However, ironically, I stopped using them for pretty
much the same reason that Manlio is saying.

Are you saying there's a problem with this implementation? It's the
only one I could just read immediately. The trick is to see that
evalState and State are just noise for the type inferencer so we just
need to think about mapM splitAt. This turns a sequence of integers
into a sequence of splitAts, each one chewing on the leftovers of the
previous one. *Way* easier than both the zipWith one-liner and the
explicit version. It says exactly what it means, almost in English.
--
Dan


Re: [Haskell-cafe] Calculating with list comprehension

2009-03-05 Thread Dan Weston
Keep in mind this is a *lexical* rewrite. In the generator rule x and e 
are not independent: x is a pattern (which introduces a bind variable) 
and e is an expression (with free variables, one of which may be bound by x)


After one application of the generator rule, we get (using a lambda 
expression instead of introducing a fresh function name f):


concatMap (\a -> [(a,b) | b <- [1..2]]) [1..3]

After another:

concatMap (\a -> concatMap (\b -> [(a,b)]) [1..2]) [1..3]

Note that the "a <-" and "b <-" map into "\a ->" and "\b ->" and bind the 
free variables a and b in the expression (a,b).
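Writing the expansion out on the example (each step just unfolds concatMap f = concat . map f):

concatMap (\a -> concatMap (\b -> [(a,b)]) [1..2]) [1..3]
  = concat [ concatMap (\b -> [(1,b)]) [1..2]
           , concatMap (\b -> [(2,b)]) [1..2]
           , concatMap (\b -> [(3,b)]) [1..2] ]
  = concat [ [(1,1),(1,2)], [(2,1),(2,2)], [(3,1),(3,2)] ]
  = [(1,1),(1,2),(2,1),(2,2),(3,1),(3,2)]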


Dan


R J wrote:
I can calculate non-nested list comprehensions without a problem, but am 
unable to calculate nested comprehensions involving, for example, the 
generation of a list of pairs where the first and separate elements are 
drawn from two separate lists, as in:


   [(a, b) | a <- [1..3], b <- [1..2]]

How does one calculate the expansion of this list?  The two rules for 
expanding list comprehensions are:


1.  Generator rule:  [e | x <- xs, Q]  =  concat (map f xs)
  where
  f x = [e | Q]

2.  Guard rule:  [e | p, Q]=  if p then [e | Q] else []


There is a third rule that I've seen on the Internet, not in an 
authoritative text:


   [e | Q1, Q2]  =  concat [ [e | Q2] | Q1 ]

I don't understand what this third rule means, or whether it's relevant.

Concat and map are defined as:

concat   :: [[a]] -> [a]
concat []=  []
concat (xs:xss)  =  xs ++ concat xss

map  :: (a -> b) -> [a] -> [b]
map f [] =  []
map f (x:xs) =  f x : (map f xs)

Any help is appreciated.






Re: [Haskell-cafe] Theory about uncurried functions

2009-03-03 Thread Dan Weston

Not a better programmer, just a better human being.

Peter Verswyvelen wrote:

Thank you all for this information. It was very enlightening.

Too bad I don't know category theory, since I think it would give me a
better view on the different forms and essence of computing.

Maybe this raises a new question: does understanding category theory
makes you a better *programmer*?




Re: [Haskell-cafe] morphisms in IO

2009-02-05 Thread Dan Weston
I truly have no idea what you are saying (and probably not even what I 
am saying), but I suspect:


a) You are calling IO the target category of applying the functor IO 
[taking a to IO a and (a -> b) to (IO a -> IO b)] to Hask.


b) This category is hardly bereft, nor discrete. Its morphisms are IO a 
-> IO b. Maybe you are thinking that it becomes an empty category after 
some fixed point process where you strip off the IO with repeated joins 
(or runIOs)? Remember there can be arbitrarily many nestings of IO, so 
that main :: IO (IO (IO Int)) is (a program that when run returns)^3 an 
Int. This is a stream with no finite least fixed point.


c) What you are calling a bereft category is an empty category. 
Without (identity) morphisms, there can be no objects. There is only one 
such category (the empty category), so naturally any two such are 
isomorphic (for what it's worth, which I suspect is not much).


Dan

Gregg Reynolds wrote:

I'm working on a radically different way of looking at IO.  Before I
post it and make a fool of myself, I'd appreciate a reality check on
the following points:

a)  Can IO be thought of as a category?  I think the answer is yes.

b)  If it is a category, what are its morphisms?  I think the answer
is: it has no morphisms.  The morphisms available are natural
transformations or functors, and thus not /in/ the category.
Alternatively: we have no means of directly naming its values, so the
only way we can operate on its values is to use morphisms from the
outside (operating on construction expressions qua morphisms.)

c)  All categories with no morphisms (bereft categories?) are
isomorphic (to each other).  I think yes.

Thanks,

gregg


Re: [Haskell-cafe] Current research on overlapping/closed type families?

2009-01-23 Thread Dan Weston

Would this then also eventually work?

data Zero
data Succ a = Succ a

type family IsFunction f

type instances
  IsFunction (a -> b) = Succ (IsFunction b)
  IsFunction c= Zero


Simon Peyton-Jones wrote:

Provided all the overlapping instances are supplied together, as you suggest, I 
think what you say makes perfect sense and does not threaten soundness.

But we have not yet implemented the idea yet.  First priority is to get type 
families working properly, and in conjunction with type classes.  Then we can 
move on to adding features.

Simon

| -Original Message-
| From: haskell-cafe-boun...@haskell.org [mailto:haskell-cafe-
| boun...@haskell.org] On Behalf Of Ryan Ingram
| Sent: 19 January 2009 23:24
| To: Haskell Cafe
| Subject: [Haskell-cafe] Current research on overlapping/closed type families?
|
| What's the status of overlapping/closed type families?  I'm interested
| in something like the following, which can currently be implemented in
| GHC with Oleg-magic using functional dependencies, but cannot, to my
| knowledge, be implemented with type families:
|
| data HTrue = HTrue
| data HFalse = HFalse
|
| type family IsFunction f
|
| {- not legal in GHC6.10 -}
| type instances
|    IsFunction (a -> b) = HTrue
|IsFunction a = HFalse
|
|   -- ryan


Re: [Haskell-cafe] Elevator pitch for functional programming

2009-01-20 Thread Dan Weston
One of the coolest things about Haskell is the ability to refer to 
values not yet calculated, without having to work out the timing yourself.


You want Fibonacci numbers?

Prelude> let z = zipWith (+) (0:1:z) (0:z) in take 10 z
[0,1,1,2,3,5,8,13,21,34]

Try doing that in one line of C++.

See also e.g.

http://sigfpe.blogspot.com/2006/12/tying-knots-generically.html

Dan

Jim Burton wrote:


Jim Burton wrote:


Adrian Neumann wrote:

There was a thread about that:

  http://www.haskell.org/pipermail/haskell-cafe/2007-September/031402.html

Thanks! I didn't literally mean elevator pitch and if I knew that thread
existed would have phrased my post differently, because a list of the
things that are cool about Haskell will not impress them. What I want and
am finding it hard to create are examples where FP shines and, for the
same problem, imperative languages look like more work. 



Parallelism! Something based on dons' blog
http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/29#smoking-4core will be a
good start.


Many will think of

programming solely in terms of developing websites, GUIs, database access,
so I will demonstrate how strongly-typed database access can help them.

Jim

[...]









Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt

2009-01-15 Thread Dan Weston

Maybe you can explain that again?

I see how the subset of Kleisli arrows (a -> m a) forms a monoid (a, 
return . id, >=>), but what to do with (a -> m b)? (>=>) is not closed 
under this larger set.
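Spelled out, that restricted monoid on endo-Kleisli arrows looks like this (a sketch; the newtype and names here are mine):

import Control.Monad ((>=>))

newtype KEndo m a = KEndo { runKEndo :: a -> m a }

-- identity and associativity follow from the monad laws
kid :: Monad m => KEndo m a
kid = KEndo return

kcompose :: Monad m => KEndo m a -> KEndo m a -> KEndo m a
kcompose (KEndo f) (KEndo g) = KEndo (f >=> g)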


Dan

Miguel Mitrofanov wrote:
Notice that monoid sounds almost *exactly* like monad. And yet,  
what you use them for is wildly unrelated.


Well, monads are monoids. I remember explaining you that...


Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt

2009-01-15 Thread Dan Weston
Richard Feynman once said: if someone says he understands quantum 
mechanics, he doesn't understand quantum mechanics.


But what did he know...

Luke Palmer wrote:
On Thu, Jan 15, 2009 at 7:02 PM, Michael Giagnocavo m...@giagnocavo.net wrote:


 Your talk of undergraduate courses to concat two lists isn't grounded
 in any kind of reality, muddies the waters, and probably scares people
 away from Haskell by giving the impression that it's much harder than
 it is.

I've been studying Haskell a bit to understand and learn more about
functional programming (I'm using F#). I have to say, the scariest
thing I've faced was exactly what you say. Everything I read built
monads up to be this ungraspable thing like quantum mechanics.


Yeah, monad is on the same level as quantum mechanics.  Both are equally 
simple and popularly construed as ungraspable.


(However to grasp monads easily you need a background in FP; to grasp QM 
easily you need a background in linear algebra)


Luke






Re: [Haskell-cafe] Re: Updating doubly linked lists

2009-01-06 Thread Dan Weston

Apfelmus,

Thanks for the reply.

From your description (without reading the code ;))

I hope the code is better than my description! :) The structure is more like

Nothing(RK 0 _)
  Nothing(RK 1 _)
A(RK 2 4)
  B(RK 3 6)
C(RK 2 0)

 The root of the tree is the center and you can descend on the right.
 But with this structure, walking from A to B is O(d) = O(n)
 (where d is the distance from the origin,
 n the side length of the grid) instead of O(1).

No. The tree is [[Node]], where the outer list has one element for each 
radius that has an occupied node and each inner list has the number of 
nodes at the given radius.


You descend the spine of the outer list radially in O(deltaR) time, 
which for incremental moves is O(1).


Then you search for an existing inner list element in O(nk(r)), which 
stays fairly constant for reasonable paths (basically, the width of a 
path swath).


 I mean, O(d) may be fine for you, but it's not O(1) for everything as
 advertised. :)

d is not the distance from the origin, it is nk(r), the number of nodes 
at a given radius: d(2) = 2, d(3) = 1.


An outward radial path will only expand the tree linearly, not 
quadratically, in size.


 Put differently, using  Data.Tree.Zipper.parent  on B will move you to
 C, not to A.

The parent of C is either A or B, depending on the path that created it, 
but parent teleports you in O(1).


Walking from A to B only involves:

(bX,bY) = (-3,0)
(aX,aY) = (-2,0)
(bR,bK) = (|bX| + |bY|, bR - bX) = (3,6) -- left halfplane
(aR,aK) = (|aX| + |aY|, aR - aX) = (2,4) -- left halfplane
deltaR  = bR - aR = 1
maybe (insertDownFirst (newNode rk) z) (moveAround rk) $ firstChild z

When firstChild fails, insertDownFirst and we're done! All operations 
are O(1).


When firstChild succeeds, moveAround queries each of the defined nodes 
-- but not any of the undefined nodes! -- at that radius. There is at 
most one defined node with Nothing value to ensure a path from the 
origin to every node (where path is not contiguous in X,Y, or K, only in R!)


The diagram you describe can be created with:

Prelude> :l GridZipper
*GridZipper> let f &&& g = \x -> (f x, g x)
*GridZipper> let f >>> g = g . f
*GridZipper> const (newGrid :: Grid String) >>> fromTree
  >>> west >>> west >>> setValue (Just "A: X=-2,Y=0,R=2,K=4")
  >>> west >>> setValue (Just "B: X=-3,Y=0,R=3,K=6")
  >>> east >>> east >>> east
  >>> east >>> east >>> setValue (Just "C: X= 2,Y=0,R=2,K=0")
  >>> assocList >>> show >>> putStrLn $ ()

-- The tree is this:

[(XY (-2) 0,"A: X=-2,Y=0,R=2,K=4"),
 (XY (-3) 0,"B: X=-3,Y=0,R=3,K=6"),
 (XY   2  0,"C: X= 2,Y=0,R=2,K=0")]

-- Zipper starts at origin:

Loc
 {tree = Node {rootLabel = GridLabel (RK 0 0) Nothing,
  subForest = []}, lefts = [], rights = [], parents = []}


-- Zipper after walking to A and setting value:

Loc
 {tree = Node {rootLabel = GridLabel (RK 2 4)
                 (Just "A: X=-2,Y=0,R=2,K=4"),
  subForest = []},
 lefts   = [],
 rights  = [],
 parents = [([],GridLabel (RK 1 2) Nothing,[])
   ,([],GridLabel (RK 0 0) Nothing,[])]}

-- Zipper after walking to B and setting value:

Loc
 {tree = Node {rootLabel = GridLabel (RK 3 6)
                   (Just "B: X=-3,Y=0,R=3,K=6"),
  subForest = []},
 lefts   = [],
 rights  = [],
 parents = [([],GridLabel (RK 2 4)
                   (Just "A: X=-2,Y=0,R=2,K=4"),
[]),([],GridLabel (RK 1 2) Nothing,[])
   ,([],GridLabel (RK 0 0) Nothing,[])]}




-- Zipper where it left off at C:
(Loc
 {tree = Node {rootLabel = GridLabel (RK 2 0)
                  (Just "C: X=2,Y=0,R=2,K=0"),
  subForest = []},
  lefts   = [],
  rights  = [],
  parents = [([Node {rootLabel = GridLabel (RK 1 2) Nothing,
  subForest = [Node {rootLabel = GridLabel (RK 2 4)
                                (Just "A: X=-2,Y=0,R=2,K=4"),
  subForest = [Node {rootLabel = GridLabel (RK 3 6)
                                (Just "B: X=-3,Y=0,R=3,K=6"),
  subForest = []}]}]}],  GridLabel (RK 1 0) Nothing,[]),
 ([],GridLabel (RK 0 0) Nothing,[])]},

-- Zipper at origin

Loc
 {tree  =  Node {rootLabel = GridLabel (RK 0 0) Nothing,
  subForest = [Node {rootLabel = GridLabel (RK 1 2) Nothing,
  subForest = [Node {rootLabel = GridLabel (RK 2 4)
                   (Just "A: X=-2,Y=0,R=2,K=4"),
  subForest = [Node {rootLabel = GridLabel (RK 3 6)
                   (Just "B: X=-3,Y=0,R=3,K=6"),
  subForest = [] } ]} ]},
   Node {rootLabel = GridLabel (RK 1 0) Nothing,
  subForest = [Node {rootLabel = GridLabel (RK 2 0)
                                (Just "C: X=2,Y=0,R=2,K=0"),
  subForest = [] }] }]},
  lefts   = [],
  rights  = [],
  parents = []})



Apfelmus, Heinrich wrote:

Dan Weston wrote:

For the 2D grid zipper above, moving around is O(1) but update is O(log
n). This is acceptable; also because I'm quite confident that a zipper
for a 2D grid with everything O(1) does not exist. I can prove that for
a special case and should probably write it down at some point.

Really? My solution (rose tree

Re: [Haskell-cafe] Infinite grid

2009-01-05 Thread Dan Weston
I think I found a solution to this, if you're still looking for one. See 
attached code. It uses a rose tree zipper where tree depth is manhattan 
distance from origin and forest width is nodes around concentric 
diamonds. The code is straightforward. Polar coords (RK) are stored in 
node label, with conversion to/from cartesian calculated on the fly (but 
may also be stored in the label if speed is more important than space).


Cyclic closed loop tests like your f below run in constant space for me.

Dan Weston

Martijn van Steenbergen wrote:

Hello,

I would like to construct an infinite two-dimensional grid of nodes, 
where a node looks like this:



data Node = Node
  { north :: Node
  , east  :: Node
  , south :: Node
  , west  :: Node
  }


in such a way that for every node n in the grid it doesn't matter how I 
travel to n, I always end up in the same memory location for that node.


I suspect another way of saying that is that


let f = f . north . east . south . west in f origin


should run in constant space. I hope this makes the problem clear. :-)

How do I do this?

Thanks in advance,

Martijn.
-- |2-D infinite grid with O(1) lookup, modification, and incremental move
-- 
-- Data is saved sparsely (wrapped in Maybe) with a rose tree zipper
-- where depth is manhattan distance from origin, and sibling index is order
-- CCW around a diamond centered at origin. Sparsity maximized by storing
-- only nonempty nodes (save that every radius below max has at least one node).
--
-- Uses Data.Tree.Zipper which can be found on hackage at
--   http://hackage.haskell.org/cgi-bin/hackage-scripts/package/rosezipper
-- 
-- Data.Tree is indexed by Int. For unbounded grid, rewrite this code,
-- Data.Tree, and Data.Tree.Zipper to use Integer (should be straightforward).
--
-- Copyright (c) Dan Weston, 2008. All rights reserved.
--
-- License: Simplified BSD. See bottom of source file for details.

module GridZipper (
 -- * Grid representation
 module Data.Tree,
 module Data.Tree.Zipper,
 GridLabel(..),
 Grid,
 GridZipper,
 newGrid,
 -- * Grid coordinates
 XY(..),
 RK(..),
 getRK,getXY,
 cartesianFromPolar,polarFromCartesian,
 -- * Grid values
 getValue,newValue,setValue,
 -- * Moving around the grid
 goToRK,goToXY,moveInXY,north,south,east,west,
 -- * Example usage
 assocList,sampleUsage
 ) where


import Data.Tree.Zipper(TreeLoc,getLabel,setLabel,modifyLabel,
root,parent,left,right,firstChild,toTree,fromTree,
insertRight,insertDownFirst)
import Data.Tree   (Tree,flatten)
import Data.Maybe  (maybe,isJust,fromJust)

--
-- DATA TYPES
--

-- |Cartesian grid coordinates
data XY = XY Int Int deriving (Eq,Show)

-- |Polar grid coordinates
-- r = |x| + |y| (manhattan distance form origin)
-- k = index around diamond, starting at (+r,0)
data RK = RK Int Int deriving  (Eq,Show)

-- |Grid label
data GridLabel  a = GridLabel RK (Maybe a) deriving (Eq,Show)

-- |Grid represented as rose tree (radius = depth, angle = width)
type Grid   a = Tree(GridLabel a)

-- |Cursor is rose tree zipper (polar coords stored in label alongside value)
type GridZipper a = TreeLoc (GridLabel a)


--
-- COORDINATE CONVERSION
--

-- |Gets cartesian coordinates from polar ones
cartesianFromPolar :: RK -> XY
cartesianFromPolar (RK 0 0) = XY 0 0
cartesianFromPolar (RK r k) = case q of
  0 -> XY (r - s   ) (s   )
  1 -> XY (negate s) (r - s   )
  2 -> XY (s - r   ) (negate s)
  3 -> XY (s   ) (s - r   )
  where (q,s) = k `divMod` r

-- |Gets polar coordinates from cartesian ones
polarFromCartesian :: XY -> RK
polarFromCartesian (XY 0 0) = RK 0 0
polarFromCartesian (XY x y)
  | x >  0 && y >= 0 = RK r y
  | y >  0 && x <= 0 = RK r (r   - x)
  | x <  0 && y <= 0 = RK r (2*r - y)
  | y <  0 && x >= 0 = RK r (3*r + x)
  where r  = abs x + abs y

--
-- COORDINATE ACCESS
--

-- |Extracts polar coordinates from label
getRK :: GridLabel a -> RK
getRK (GridLabel rk _) = rk

-- |Extracts cartesian coordinates from label
getXY :: GridLabel a -> XY
getXY = cartesianFromPolar . getRK

--
-- VALUE ACCESS AND MODIFY
--

-- |Extracts grid value, if any, from label
getValue :: GridLabel a -> Maybe a
getValue (GridLabel _ value) = value

-- |Returns copy, replacing grid value
newValue :: Maybe a -> GridLabel a -> GridLabel a
newValue v (GridLabel rk _) = GridLabel rk v

-- |Returns copy, replacing grid value

Re: [Haskell-cafe] Re: Updating doubly linked lists

2009-01-05 Thread Dan Weston

 For the 2D grid zipper above, moving around is O(1) but update is O(log
 n). This is acceptable; also because I'm quite confident that a zipper
 for a 2D grid with everything O(1) does not exist. I can prove that for
 a special case and should probably write it down at some point.

Really? My solution (rose tree zipper where tree depth is manhattan 
distance from origin and forest width is nodes around concentric 
diamonds, see 
http://permalink.gmane.org/gmane.comp.lang.haskell.cafe/49948) was 
designed specifically to be amortized constant for everything for paths 
that do not specifically move helically around the origin. The 
complexity of lookup is O(d) where d is the number of defined nodes at a 
given radius. Until the grid gets pretty dense, d grows very slowly for 
most sane paths.


Have I missed something?

Dan

Apfelmus, Heinrich wrote:

S. Günther wrote:

What kind of structure do you need exactly?
What I really need is a structure which represents a two dimensional 
grid, i.e. it consists of nodes having a value and a list of

neighbours attached to it. Point is that if node 1 has node 2 as a
neighbour then node 2 has to have node 1 as a
neighbour and each node has the same number of neighbours (currently
4, but may vary).


Ah, so you have a rectangular (or later hexagonal?) 2D grid? I suggest
representing it as

  Data.Map (Integer, Integer) a

as explained below.


That's why I said that thinking about the circular case was just a
divergence that really got me wondering/interested which is why I posted
the question in it's short form at the beginning.


Exploring a related easier problem is always a good way to get some
intuition for tackling the harder one. :)


Anyways, back to the structure I need. One additional thing which will
happen during the algorithm is that there will be updates to a certain
local neighboruhood of a node. Now I understand, that that might very
well be possible with zippers.

Instead of lists of neighbouring nodes I might as well save the paths
through the graphs separately from the nodes although I only have a very
vague intuition about how that would look like. And instead of calculating
a lists of nodes to update, I could calculate a path visting the
nodes and update them (again beeing unable to escape from the prison
of an imperative mindset) traversing the path.


A zipper indeed allows you to move to neighbors and update them.


But the point was that I just had a hard time generalizing what I read
about zippers to structures where you can have embedded cycles, e.g.

up . left . down . right = id.


If you interpret zippers as the original data structure with a hole,
this is not so difficult. For instance, consider a rectangular grid

  +--+--+--+--+--+--+--+
  |  |  |  |  |  |  |  |
  +--+--+--+--+--+--+--+
  |  |  |  |  |  |  |  |
  +--+--+--+--+--+--+--+
  |  |  |  |  |  |  |  |
  +--+--+--+--+--+--+--+

where you store some data at every node. Now, a zipper is just the old
data structure but one node is marked as hole

  +--+--+--+--+--+--+--+
  |  |  |  |  |  |  |  |
  +--+--+--O--+--+--+--+
  |  |  |  |  |  |  |  |
  +--+--+--+--+--+--+--+
  |  |  |  |  |  |  |  |
  +--+--+--+--+--+--+--+

If you represent the grid as a rectangular table

  type Position = (Integer, Integer)
  type Grid a   = Data.Map Position a

a zipper is simply the grid with an extra marker

  type Zipper a = (Grid a, Position)

  left,right,up,down :: Zipper a -> Zipper a
  left  (g,(x,y)) = (g,(x-1,y))
  right (g,(x,y)) = (g,(x+1,y))
  up(g,(x,y)) = (g,(x,y-1))
  down  (g,(x,y)) = (g,(x,y+1))

  update :: a -> Zipper a -> Zipper a
  update a (g,(x,y)) = (insert (x,y) a g, (x,y))

Note that the  left, right  etc. are not baked into the data type but
implemented as normal functions.

In principle, the same could be done for lists

  type ZipperL a = ([a], Int)

  left, right :: ZipperL a -> ZipperL a
  left  (xs,i) = (xs,i-1)
  right (xs,i) = (xs,i+1)

  update :: a -> ZipperL a -> ZipperL a
  update a (xs,i) = (take i xs ++ [a] ++ drop (i+1) xs, i)

This is a valid implementation of a zipper for lists, but of course is
very inefficient,  update  is O(n) . The key thing about the original
list zipper with back and front list is that all operations are O(1).

For the 2D grid zipper above, moving around is O(1) but update is O(log
n). This is acceptable; also because I'm quite confident that a zipper
for a 2D grid with everything O(1) does not exist. I can prove that for
a special case and should probably write it down at some point.

In other words, I suggest representing your grid as a

   Data.Map (Integer,Integer) a

and accept the minor inconvenience of a O(log n) update. Choosing a
different finite map implementation may give a better constant factor.
For instance you can nest two  Data.IntMap  etc.


Righty right, but there's still the possibility that given all the 
time in the world and the clearest explanations I'm just to dense to

get a hold of it. That said I hope 

Re: [Haskell-cafe] Infinite grid

2008-12-29 Thread Dan Weston
I'm confused how this isn't just tantamount to using Data.Map 
(Integer,Integer) a.


The essential problem is that you have an algebra acting on a topology. 
The algebra is easily rewritten to an efficient form, but a sequence of 
topological actions is not, because it is not sufficiently forgetful of 
path, due to the inherent inability of F-algebras (or maybe lack of 
cleverness on our part?) to create a data structure that allows a 2D 
zipper without duplication.


Unlike a 1D zipper, there always exists a path from an updated element 
to every other element that doesn't cross the zipped path, so everything 
becomes tainted (like pulling on the thread of an unhemmed garment).


In other words, a 1D space can be encoded as a binary tree or two lists, 
giving either global O(log n) or local O(1) access, respectively. A 2D 
space can also be encoded as a quadtree giving global O(log n) access, 
but I can't imagine a zipper structure that efficiently can cut and 
splice in constant time as it moves through.


Or have I (once again) completely missed the obvious?

Dan

David Menendez wrote:

On Mon, Dec 29, 2008 at 5:55 PM, Martijn van Steenbergen
mart...@van.steenbergen.nl wrote:

Hello,

I would like to construct an infinite two-dimensional grid of nodes, where a
node looks like this:


data Node = Node
 { north :: Node
 , east  :: Node
 , south :: Node
 , west  :: Node
 }

in such a way that for every node n in the grid it doesn't matter how I
travel to n, I always end up in the same memory location for that node.


I'm curious to know what you would do with it, actually. Once you have
such a structure, you can't really modify it because that would
require copying the entire thing, and you can't fill it with IORefs,
as ChrisK suggested, because you would need an infinite supply of
them.

Here's my own solution, which is longer than Luke Palmer's but doesn't
rely on an intermediate data structure.


data Node a = Node
{ contents :: a
, north:: Node a
, south:: Node a
, east :: Node a
, west :: Node a
}

grid :: (Integer -> Integer -> a) -> Node a
grid f = origin
where
origin = Node
{ contents = f 0 0
, north= growNorth 0 1 origin
, south= growSouth 0 (-1) origin
, east = growEast 1 origin
, west = growWest (-1) origin
}

growEast x w = self
where
self = Node
{ contents = f x 0
, north= growNorth x 1 self
, south= growSouth x (-1) self
, east = growEast (x+1) self
, west = w
}

growWest x e = self
where
self = Node
{ contents = f x 0
, north= growNorth x 1 self
, south= growSouth x (-1) self
, east = e
, west = growWest (x+1) self
}

growNorth x y s = self
where
self = Node
{ contents = f x y
, north= growNorth x (y-1) self
, south= s
, east = north (east s)
, west = north (west s)
}

growSouth x y n = self
where
self = Node
{ contents = f x y
, north= n
, south= growSouth x (y-1) self
, east = south (east n)
, west = south (west n)
}

--

*Main Debug.Trace> let o = grid (\x y -> trace ("compute " ++ show (x,y)) (x,y))
*Main Debug.Trace> contents o
compute (0,0)
(0,0)
*Main Debug.Trace> contents . west . south . east . north $ o
(0,0)
*Main Debug.Trace> contents . east . north $ o
compute (1,1)
(1,1)
*Main Debug.Trace> contents . north . east $ o
(1,1)






Re: [Haskell-cafe] Rewrite thunk action rule?

2008-12-22 Thread Dan Weston

Peter Todd wrote:

Not quite. If I have a thunk, at the low level somewhere it must refer
to the transform function, the transform matrix, and the element that is
to be transformed. If I apply another transform to that unevaluated
thunk, my understanding is that haskell will represent it as such:

thunk transform Ta (thunk transform Tb e)

When I want the following:

thunk transform (Ta * Tb) e


Is this an example of a Thunktor?



Re: [Haskell-cafe] Numerics implementing different instances of the same class

2008-12-12 Thread Dan Weston

What about something like

data AddMult a b = AddMult a b

class Monoid a where
  operation :: a -> a -> a
  identity  :: a

instance (Monoid a, Monoid b) => Monoid (AddMult a b) where
  operation (AddMult a1 m1)
            (AddMult a2 m2)
          = AddMult (operation a1 a2)
                    (operation m1 m2)
  identity  = AddMult identity identity

class Commutative a where
  -- Nothing, this is a programmer proof obligation

class Monoid a => Group a where
  inverse :: a -> a

class (Commutative a, Group a) => AbelianGroup a where

class (AbelianGroup a, AbelianGroup b) => Field a b where

instance AbelianGroup a => Field a a where
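(As an aside: for the plain additive/multiplicative split, the newtype trick George uses below is exactly what Data.Monoid's Sum and Product wrappers package up. A small sketch, assuming a base that ships them:)

import Data.Monoid (Sum(..), Product(..), mappend, mempty)

five :: Int
five = getSum (Sum 2 `mappend` Sum 3 `mappend` mempty)          -- 5

six :: Int
six = getProduct (Product 2 `mappend` Product 3 `mappend` mempty)  -- 6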


George Pollard wrote:

Is there a good way of doing this? My running example is Monoid:


class Monoid a where
operation :: a -> a -> a
identity :: a


With the obvious examples on Num:


instance (Num a) => Monoid a where
    operation = (+)
    identity = 0

instance (Num a) => Monoid a where
    operation = (*)
    identity = 1


Of course, this won't work. I could introduce a newtype wrapper:


newtype (Num a) => MulNum a = MulNum a
newtype (Num a) => AddNum a = AddNum a

instance (Num a) => Monoid (MulNum a) where
    operation (MulNum x) (MulNum y) = MulNum (x * y)
    identity = MulNum 1

instance (Num a) => Monoid (AddNum a) where ... -- etc


However, when it comes to defining (e.g.) a Field class you have two
Abelian groups over the same type, which won't work straight off:


class Field a where ...
instance (AbelianGroup a, AbelianGroup a) => Field a where ...


Could try using the newtypes again:

instance (AbelianGroup (x a), AbelianGroup (y a)) => Field a where ...


... but this requires undecidable instances. I'm not even sure if it
will do what I want. (For one thing it would also require an indication
of which group distributes over the other, and this may restore
decidability.)

I'm beginning to think that the best way to do things would be to drop
the newtype wrappers and include instead an additional parameter of a
type-level Nat to allow multiple definitions per type. Is this a good
way to do things?

Has anyone else done something similar? I've taken a look at the Numeric
Prelude but it seems to be doing things a bit differently. (e.g. there
aren't constraints on Ring that require Monoid, etc)

- George






Re: [Haskell-cafe] Animated line art

2008-12-04 Thread Dan Weston

Andrew,

I can think of several reasons why simple time-indexed animation may be 
a bad idea. Some important aspects of animation are usually:


1) A main use case is playback, where time change is continuous and 
monotonic.


2) Differential action is often much cheaper than time jumping (i.e. 
between adjacent time steps most lines do not move and do not need to be 
recomputed. It is cheaper to modify the previous animation.)


3) Time is a topological space, with lots of nice properties: change of 
time is a group, intervals are decomposable, etc. Animation inherits 
this structure as a G-torsor (e.g. between two close enough times the 
line art should look very similar, there is no canonical start time). 
You are throwing this all away if you just treat line art like a point set.


4) Although your Time set may be discrete, you may want to pretend it is 
smooth to facilitate e.g. motion blur and simulation, which strongly 
favor a differential approach. There is often a need for run-up or 
adaptive time interval subdivision. Clip start and stop times are often 
changed interactively.


Instead of defining a calculus (line art indexed by time), you may want 
to define an algebra (action of time change on line art). You can 
manipulate the algebra independently for efficiency (e.g. combine 
adjacent time intervals into one big interval if pointwise evaluation is 
cheap, or subdivide an interval if differential change is cheap) and 
then apply the final result to line art defined at some time origin.
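A tiny sketch of that algebra-versus-calculus distinction (all names here are mine, just to fix ideas):

type Time = Double

-- the "calculus": line art indexed by time
newtype Anim a = Anim { at :: Time -> a }

-- the "algebra": time changes compose among themselves (a group under addition)
newtype TimeShift = TimeShift Double

fuse :: TimeShift -> TimeShift -> TimeShift
fuse (TimeShift a) (TimeShift b) = TimeShift (a + b)

-- ... and act on animations; adjacent shifts can be fused before applying
shift :: TimeShift -> Anim a -> Anim a
shift (TimeShift dt) (Anim f) = Anim (f . (+ dt))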


Take a quick read of http://en.wikipedia.org/wiki/Principal_bundle where 
the (group) G is time change and the (fiber bundle) P is the line art.


At minimum, implement your time argument as a start time + delta time, 
and consider a State monad (or StateT monad transformer) to hold future 
optimizations like cached (time,art) pairs in case you change your mind.


Dan

Tim Docker wrote:

It seems that the correct course of action is to design a DSL for

declaratively describing animated line art. Does anybody have ideas
about what such a thing might look like?

Someone else already mentioned FRAN and it's ilk. But perhaps you don't
need something that fancy. If you implement your drawing logic as a
function from time to the appropriate render actions, ie

| import qualified Graphics.Rendering.Cairo as C
| 
| type Animation = Time -> C.Render ()


then you just need to call this function multiple times to generate
sucessive frames.

Tim


Re: [Haskell-cafe] Bit Field Marshalling

2008-11-07 Thread Dan Weston

 C standard allows padding and reorder of struct entries

Almost. The ISO C standard does allow struct padding, but *not* reordering:

http://www.open-std.org/JTC1/SC22/wg14/www/docs/n1124.pdf
ISO/IEC 9899:1999 C Standard §6.7.2.1.13

Within a structure object, the non-bit-field members and the units in 
which bit-fields reside have addresses that increase in the order in 
which they are declared. A pointer to a structure object, suitably 
converted, points to its initial member (or if that member is a 
bit-field, then to the unit in which it resides), and vice versa. There 
may be unnamed padding within a structure object, but not at its beginning.


Dan

Lutz Donnerhacke wrote:

* Michael D. Adams wrote:

But as far as I can tell, hsc2hs doesn't support bit
fields.  On top of that I'm not sure I can make any safe assumptions
about what order the bit fields are packed (LSB or MSB first).


C standard allows padding and reorder of struct entries in order to match
alignment requirements. The only exception is bit fields, which must not
be reordered or padded. This way bit fields are the only portable way to
define the binary representation in C. Unfortunately the C standard does not
specify any bit order for bit fields, but almost all implementations use
the machine-specific bit order, in order to ease access to multiple-bit-wide
bit fields, and fill LSB to MSB. But there is no guarantee.





Re: [Haskell-cafe] Re: A heretic question

2008-10-23 Thread Dan Weston
For the record, C++ (and a crippled scripting language called MEL that 
makes C look good) were used in the Maya 3D graphics software used for 
the Lord of the Rings movies [1]:


Weta Digital utilized Maya® as the core 3D animation software 
technology throughout the process of creating digital characters and 
effects for the Lord of the Rings™ films -- Lord of the Rings: The 
Fellowship of the Ring ™, Lord of the Rings: The Two Towers™, and Lord 
of the Rings: The Return of the King™.


[1] 
http://usa.autodesk.com/adsk/servlet/item?siteID=123112id=6878908linkID=7679654


Maya provided at that time (as now) a C++ API for plugins, with a 
data-structure poor MEL scripting language. Now (but not at that time), 
Python can also be used for both scripting and plugins.


There is (as yet) no Haskell API (anyone up for writing one?). Sorry to 
burst y'alls delusions of grandeur. I love Haskell greatly over C++, but 
the claims I've been reading about its use in industry are a still a wee 
bit premature.


Dan


Luke Palmer wrote:

On Thu, Oct 23, 2008 at 11:21 AM, Albert Y. C. Lai [EMAIL PROTECTED] wrote:

Benjamin L.Russell wrote:

C++ : paintbrush :: Haskell : ?

C++ : paintbrush :: Haskell : gimp or photoshop ?
[...]
C++ : paintbrush :: Haskell : OpenGL ?
[...]
C++ : paintbrush :: Haskell : graphics software used for the Lord of the
Rings movies?


Nah, I'd say it's:

 C++ : paintbrush :: Haskell : category theory

:-)

Luke


Re: [Haskell-cafe] Very silly

2008-10-16 Thread Dan Weston
Not that I want or need to defend C++ on this list, but 
reference-counted smart pointers (e.g. boost::shared_ptr), embedded 
inside copy-on-write proxy classes, largely simulate eager garbage 
collection. Targeted overriding of the new operator can make this lazier 
for efficiency.


In other words, there are well established idioms in most successful 
languages to solve obvious problems. Haskell's strengths are more in 
making these idioms simple and robust enough for the average user, not 
just the guru, and to make difficult or impossible the misuse of unsafe 
idioms.


Dan

Don Stewart wrote:

asandroq:

Hallo,

Andrew Coppin wrote:

C++ has some interesting ideas. I haven't learned how to use templates
yet, but what I do find interesting is that there is no automatic memory
management, and yet you can still do fairly dynamic programming. I've
never seen any other language that allows this. (I had assumed it's
impossible...) This makes me wonder just now necessary GC really is, and
whether there is some way to avoid it...


 Garbage collection was invented by Lisp implementors because of a
common pattern in functional languages: The sharing of parts of
structures, like lists. In an imperative world this is straightforward,
one allocates a linked list, uses it, and then releases the memory. In a


This is why memory management is a notoriously trivial problem in C++ ;)

-- Don (who thinks that complicated access patterns aren't unique to FP)


Re: [Haskell-cafe] What I wish someone had told me...

2008-10-15 Thread Dan Weston

I suspect that more has been done since 1997. Isn't that pre-Oleg?

Karl Mazurak wrote:

Yitzchak Gale wrote:

Derek Elkins wrote:

In general, to encode OO...
turns out all you needed was recursive bounded
existential quantification.


Do you have a reference for that?


I'm not sure if this is precisely what Derek had in mind, but Bruce, 
Cardelli, and Pierce did a comparison of various object encodings:


http://www.cis.upenn.edu/~bcpierce/papers/compobj.ps

It's been a while since I read that paper, but skipping to the end tells 
me that the approach with recursive types and bounded existentials was 
the only one to support method override, although it was less attractive 
on other fronts.





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] List as input

2008-10-15 Thread Dan Weston

Google "median order statistic".

E.g. this is an interesting (and colorful) discussion:

http://ocw.mit.edu/NR/rdonlyres/Electrical-Engineering-and-Computer-Science/6-046JFall-2005/60D030CD-081D-4192-9FB5-C220116E280D/0/lec6.pdf
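
For a concrete illustration of the trade-off, here is a hedged sketch of 
median-of-three pivot selection, a common cheap compromise between a fixed 
pivot and a true O(n) order statistic (medianOfThree and qsortMedian3 are 
illustrative names, not from the original post):

import Data.List (sort)

-- Median of the first, middle and last elements: a cheap guard against
-- already-sorted input, without computing a true median.
medianOfThree :: Ord a => [a] -> a
medianOfThree xs = sort [head xs, xs !! (length xs `div` 2), last xs] !! 1

qsortMedian3 :: Ord a => [a] -> [a]
qsortMedian3 [] = []
qsortMedian3 xs = qsortMedian3 smaller ++ equal ++ qsortMedian3 larger
  where
    pivot   = medianOfThree xs
    smaller = filter (< pivot)  xs
    equal   = filter (== pivot) xs
    larger  = filter (> pivot)  xs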

Toby Hutton wrote:

On Wed, Oct 15, 2008 at 5:44 PM, leledumbo [EMAIL PROTECTED] wrote:

module Main where

import Data.List

-- quicksort of any list
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

-- optimized quicksort, uses middle element as pivot
qsortOpt [] = []
qsortOpt x  = qsortOpt less ++ [pivot] ++ qsortOpt greater
 where
   pivot = x !! ((length x) `div` 2)
    less = filter (< pivot) (delete pivot x)
    greater = filter (>= pivot) (delete pivot x)

main = do
 putStr Enter a list: 
 l - readLn
 print (qsortOpt l)
-- end of code


I'm curious as to why taking the pivot from the middle is an
'optimized' version.  For this to be true you must be making some
assumptions about the contents of the list.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The container problem

2008-09-26 Thread Dan Weston
More specifically, although a set is a perfectly good (lowercase) 
functor, Set is not a (Haskell) Functor.


Set's map has an Ord constraint, but a Functor instance's fmap must be 
parametric over *all* element types, not just the proper subset of them 
that have a total ordering.
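
To make the mismatch concrete, here is a small sketch using only Data.Set's 
documented API (doubled is an illustrative name, not from this thread):

import qualified Data.Set as Set

-- Set.map exists, but its type carries an Ord constraint on the result:
--   Set.map :: Ord b => (a -> b) -> Set.Set a -> Set.Set b
doubled :: Set.Set Int -> Set.Set Int
doubled = Set.map (* 2)

-- A Functor instance would need
--   fmap :: (a -> b) -> Set.Set a -> Set.Set b
-- for *every* result type b, including function types, which have no Ord
-- instance, so the instance cannot be written (short of the tricks below).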


But see attempts to fix this:

http://okmij.org/ftp/Haskell/types.html#restricted-datatypes
http://www.randomhacks.net/articles/2007/03/15/data-set-monad-haskell-macros

Dan

Jonathan Cast wrote:

On Fri, 2008-09-26 at 15:25 -0700, Jason Dusek wrote:

Can someone explain, why is it that Set can not be a Monad?


It can't even be a functor (which all monads are).  You can't implement

fmap (+) $ Set.fromList [1, 2, 3]

with Data.Set, because you can't order functions of type Integer ->
Integer in a non-arbitrary way.  So you can't have a balanced binary
tree of them in a non-arbitrary way, either.  Something like

fmap putStrLn $ Set.fromList ["Hello", "world"]

is similar.

Since Data.Set is implemented in Haskell, it can only use facilities
available to Haskell libraries.  So it can't work for arbitrary
elements; but a Functor instance requires that it does work.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there already an abstraction for this?

2008-09-22 Thread Dan Weston

 I can implement these with some 'sugar' as:

 identity (Sum (Lit 0) a)= a
 identity (Sum a (Lit 0))= a
 identity (Difference a (Lit 0)) = a
 identity (Product a (Lit 1))= a
 identity (Product (Lit 1) a)= a
 identity (Quotient a (Lit 1))   = a
 identity a  = a

Why do you need mutual recursion? What's wrong with:

 identity (Sum (Lit 0) a)= identity a
  ...
 identity (Quotient a (Lit 1))   = identity a
 identity a  = a

Structural recursion ensures that this always terminates.

Dan

Jeremy Shaw wrote:

Hello,

I am trying to figure out if there is an existing abstraction I am
missing here.

I have an expression data-type:

data Expr 
   = Quotient Expr Expr

   | Product Expr Expr
   | Sum Expr Expr
   | Difference Expr Expr
   | Lit Double
   | Var Char
 deriving (Eq, Ord, Data, Typeable, Read, Show)



And I want to write a function that will take an expression and
automatically apply the identity laws to simplify the expression.

The basic identity laws are:

 a + 0 = a
 a * 1 = a

I can implement these with some 'sugar' as:


identity (Sum (Lit 0) a)= a
identity (Sum a (Lit 0))= a
identity (Difference a (Lit 0)) = a
identity (Product a (Lit 1))= a
identity (Product (Lit 1) a)= a
identity (Quotient a (Lit 1))   = a
identity a  = a


This works fine when the identity only needs to be applied to the root
of the expression tree:

*Main ppExpr $ identity (expr 1 + 0)
1

But for more complicated trees it does not fully apply the identity laws:

*Main ppExpr $ identity (expr 0 + (0 + 0) + (0 + 0))
((0 + 0) + (0 + 0))

What we need to do is first apply the identity function to the
children, and then apply them to the parent of the updated children. A
first attempt would be to extend the identity function like this:


identity (Sum a b)  = identity (Sum (identity a) (identity b))


However, that will not terminate because that same case will keep
matching over and over. Another approach is to have two mutually
recursive functions like:


identity' (Sum (Lit 0) a)= identityRec a
identity' (Sum a (Lit 0))= identityRec a
identity' a = a



identityRec (Sum a b) = identity' (Sum (identity' a) (identity' b))


This prevents non-termination, but you have to be careful about
calling identity' vs identityRec or you will either re-introduce
non-termination, or you may not fully apply the identity laws.

Another option to create a helper function like:


-- |Deep map of an expression.
eMap :: (Expr - Expr) - Expr - Expr
eMap f (Sum a b) = f (Sum (eMap f a) (eMap f b))
eMap f (Difference a b) = f (Difference (eMap f a) (eMap f b))
eMap f (Product a b) = f (Product (eMap f a) (eMap f b))
eMap f (Quotient a b) = f (Quotient (eMap f a) (eMap f b))
eMap f (Var v) = f (Var v)
eMap f (Lit i) = f (Lit i)


Now we can easily apply the identity law recursively like:


deepIdentity :: Expr - Expr
deepIdentity = eMap identity


*Main ppExpr (deepIdentity (expr 0 + (0 + 0) + (0 + 0)))
0

Sweet!

But, having to write eMap out by hand like that somehow feels wrong --
as if I am missing some additional abstraction. In some respects eMap
is like a Functor, but not quite. I expect there is a way to implement
eMap using Data.Generics, which is perhaps better, but I still feel
like that is missing something?
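
One way to get that Data.Generics version is SYB's everywhere combinator; 
the following is a hedged sketch, assuming the Expr type derives Data and 
Typeable exactly as above:

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics (Data, Typeable, everywhere, mkT)

data Expr
    = Quotient Expr Expr
    | Product Expr Expr
    | Sum Expr Expr
    | Difference Expr Expr
    | Lit Double
    | Var Char
  deriving (Eq, Ord, Data, Typeable, Read, Show)

-- One shallow rewrite step, as in the identity function above.
identity :: Expr -> Expr
identity (Sum (Lit 0) a)        = a
identity (Sum a (Lit 0))        = a
identity (Difference a (Lit 0)) = a
identity (Product a (Lit 1))    = a
identity (Product (Lit 1) a)    = a
identity (Quotient a (Lit 1))   = a
identity a                      = a

-- everywhere applies the rewrite bottom-up at every node, which is the
-- same deep traversal that eMap spells out constructor by constructor.
deepIdentity :: Expr -> Expr
deepIdentity = everywhere (mkT identity)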

Anyway, I just thought I would ask in case someone recognized this
pattern and could point me in the right direction. I have attached a
working example program you can play with.

I would also be interested in alternative approaches besides the ones
I outlined.

thanks!
j.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there already an abstraction for this?

2008-09-22 Thread Dan Weston
Oops, never mind. This is just the shallow application you referred to. 
Too fast with that send button!


Dan Weston wrote:


  I can implement these with some 'sugar' as:
 
  identity (Sum (Lit 0) a)= a
  identity (Sum a (Lit 0))= a
  identity (Difference a (Lit 0)) = a
  identity (Product a (Lit 1))= a
  identity (Product (Lit 1) a)= a
  identity (Quotient a (Lit 1))   = a
  identity a  = a

Why do you need mutual recursion? What's wrong with:

 identity (Sum (Lit 0) a)= identity a
  ...
 identity (Quotient a (Lit 1))   = identity a
 identity a  = a

Structural recursion ensures that this always terminates.

Dan

Jeremy Shaw wrote:

Hello,

I am trying to figure out if there is an existing abstraction I am
missing here.

I have an expression data-type:


data Expr= Quotient Expr Expr
   | Product Expr Expr
   | Sum Expr Expr
   | Difference Expr Expr
   | Lit Double
   | Var Char
 deriving (Eq, Ord, Data, Typeable, Read, Show)



And I want to write a function that will take an expression and
automatically apply the identity laws to simplify the expression.

The basic identity laws are:

 a + 0 = a
 a * 1 = a

I can implement these with some 'sugar' as:


identity (Sum (Lit 0) a)= a
identity (Sum a (Lit 0))= a
identity (Difference a (Lit 0)) = a
identity (Product a (Lit 1))= a
identity (Product (Lit 1) a)= a
identity (Quotient a (Lit 1))   = a
identity a  = a


This works fine when the identity only needs to be applied to the root
of the expression tree:

*Main ppExpr $ identity (expr 1 + 0)
1

But for more complicated trees it does not fully apply the identity laws:

*Main ppExpr $ identity (expr 0 + (0 + 0) + (0 + 0))
((0 + 0) + (0 + 0))

What we need to do is first apply the identity function to the
children, and then apply them to the parent of the updated children. A
first attempt would be to extend the identity function like this:

identity (Sum a b)  = identity (Sum (identity a) 
(identity b))


However, that will not terminate because that same case will keep
matching over and over. Another approach is to have two mutually
recursive functions like:


identity' (Sum (Lit 0) a)= identityRec a
identity' (Sum a (Lit 0))= identityRec a
identity' a = a



identityRec (Sum a b) = identity' (Sum (identity' a) (identity' b))


This prevents non-termination, but you have to be careful about
calling identity' vs identityRec or you will either re-introduce
non-termination, or you may not fully apply the identity laws.

Another option to create a helper function like:


-- |Deep map of an expression.
eMap :: (Expr - Expr) - Expr - Expr
eMap f (Sum a b) = f (Sum (eMap f a) (eMap f b))
eMap f (Difference a b) = f (Difference (eMap f a) (eMap f b))
eMap f (Product a b) = f (Product (eMap f a) (eMap f b))
eMap f (Quotient a b) = f (Quotient (eMap f a) (eMap f b))
eMap f (Var v) = f (Var v)
eMap f (Lit i) = f (Lit i)


Now we can easily apply the identity law recursively like:


deepIdentity :: Expr - Expr
deepIdentity = eMap identity


*Main ppExpr (deepIdentity (expr 0 + (0 + 0) + (0 + 0)))
0

Sweet!

But, having to write eMap out by hand like that somehow feels wrong --
as if I am missing some additional abstraction. In some respects eMap
is like a Functor, but not quite. I expect there is a way to implement
eMap using Data.Generics, which is perhaps better, but I still feel
like that is missing something?

Anyway, I just thought I would ask in case someone recognized this
pattern and could point me in the right direction. I have attached a
working example program you can play with.

I would also be interested in alternative approaches besides the ones
I outlined.

thanks!
j.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Comparing GADTs for Eq and Ord

2008-09-15 Thread Dan Weston

Take a look at

http://www.haskell.org/haskellwiki/GHC/AdvancedOverlap
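
One way to handle that case, different from the class-based dispatch on the 
wiki page above, is to give Const a Typeable (and Eq) constraint and compare 
payloads via cast. A hedged sketch, not taken from either poster's code:

{-# LANGUAGE GADTs #-}
import Data.Typeable (Typeable, cast)

data Expr a where
  Const :: (Typeable a, Eq a) => a -> Expr a
  Equal :: Expr a -> Expr a -> Expr Bool

-- Heterogeneous comparison: payloads of different types are simply unequal.
(===) :: Expr a -> Expr b -> Bool
Const a   === Const b   = maybe False (== b) (cast a)
Equal a b === Equal c d = a === c && b === d
_         === _         = False

instance Eq (Expr a) where
  (==) = (===)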

Tom Hawkins wrote:

On Mon, Sep 15, 2008 at 3:11 PM, apfelmus [EMAIL PROTECTED] wrote:

So, in other words, in order to test whether terms constructed with  Equal  are
equal, you have to compare two terms of different type for equality. Well,
nothing easier than that:

   (===) :: Expr a -> Expr b -> Bool
   Const       === Const         = True
   (Equal a b) === (Equal a' b') = a === a' && b === b'
   _           === _             = False

   instance Eq (Expr a) where
       (==) = (===)


OK.  But let's modify Expr so that Const carries values of different types:

data Expr :: * -> * where
  Const :: a -> Term a
  Equal :: Term a -> Term a -> Term Bool

How would you then define:

Const a === Const b  = ...


-Tom
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-29 Thread Dan Weston
C++ faced this very issue by saying that with global data, uniqueness of 
initialization is guaranteed but order of evaluation is not. Assuming 
that the global data are merely thunk wrappers over some common data 
source, this means that at minimum, there can be no data dependencies 
between plugins where the order of evaluation matters.


Another C++ comparison is with a virtual base class, where A::B::D and 
A::C::D are supposed to be equal, irrespective of whether it was B or C 
that first called the constructor. In this case, some witness (a vtable) 
is necessary to ensure that this happens correctly.


Dan

Ganesh Sittampalam wrote:

On Thu, 28 Aug 2008, Adrian Hey wrote:


Ganesh Sittampalam wrote:

On Thu, 28 Aug 2008, Adrian Hey wrote:


There's no semantic difficulty with the proposed language extension,


How does it behave in the presence of dynamic loading?


To answer this you need to be precise about the semantics of what
is being dynamically loaded. But this is too complex an issue
for me to get in to right now.


If you want to standardise a language feature, you have to explain its 
behaviour properly. This is one part of the necessary explanation.


To be concrete about scenarios I was considering, what happens if:

 - the same process loads two copies of the GHC RTS as part of two 
completely independent libraries? For added complications, imagine that 
one of the libraries uses a different implementation instead (e.g. Hugs)


 - one Haskell program loads several different plugins in a way that 
allows Haskell values to pass across the plugin boundary


How do these scenarios work with use cases for <- like (a) Data.Unique 
and (b) preventing multiple instantiation of a sub-library?


Actually, as far as things like hs-plugins are concerned, I'd always 
meant to find out one day what exactly a plugin is, semantically. But I've 
never had cause to use them, so I never got round to it. Like, is it a 
value, or does it have state and identity or what?


Personally I think of them as values. I'm not sure what your questions 
about state and identity mean. If you don't have global variables, then 
state doesn't matter.



What about remote procedure calls?


Dunno, what problem do you anticipate?


Will Data.Unique still work properly if a value is sent across a RPC 
interface?



Also what if I want a thread-local variable?


Well actually I would say that threads are a bad concurrency model so
I'm not keen on thread local state at all. Mainly because I'd like to
get rid of threads, but also a few other doubts even if we keep
threads.


Even if you don't like them, people still use them.

(I.E. Just making existing practice *safe*, at least in the sense that 
the compiler ain't gonna fcuk it up with INLINING or CSE and every one 
understands what is and isn't safe in ACIO)


Creating new language features means defining their semantics rather 
more clearly than just no inlining or cse, IMO.


Ganesh
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-29 Thread Dan Weston
I actually was more interested in the problems with the obvious fix 
for this, namely the "construct on first use" idiom:


int A(int a) { static int aa = a; return aa; }
int B()  { return A(3); }
int C()  { return A(7); }
int D()  { if (today() == Tuesday) B(); else C(); return A(0); }

What is the value of D? Notice that this is never a problem with pure 
functions. The problem is that today() makes this an IO monad, and the 
swearing starts again.


Dan

Bryan O'Sullivan wrote:

On Fri, Aug 29, 2008 at 4:33 PM, Dan Weston [EMAIL PROTECTED] wrote:

C++ faced this very issue by saying that with global data, uniqueness of
initialization is guaranteed but order of evaluation is not.


In C++ circles, this is referred to as the static initialization
order fiasco, and it is a frequent cause of crashes that are very
difficult to debug.

http://www.parashift.com/c++-faq-lite/ctors.html#faq-10.12

I think it would be fair to say that C++ pushed this problem off to
every user of the language. I haven't seen a coherent description of
what the semantics of top-level <- should be, but avoidance of
widespread swearing would be at the top of my list of requirements.





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Propeganda

2008-08-27 Thread Dan Weston

Tim Docker wrote:
 
David Roundy wrote:


Which illustrates the point that it's not type safety 
that protects us from segfaults, so much as bounds checking,

and that's got a non-trivial runtime cost.  At least, most
segfaults that *I've* caused (in C or C++) have been from
overwriting the bounds of arrays, and that's precisely the problem
that Haskell does *not* solve using its type system. 


That differs from my experience. Most segfaults that *I've* caused (in
C or C++) have been due to dereferencing null pointers. Type safety
does help you here, in that Maybe lets you distinguish the types of
things that are optionally present from those that must be. 


Tim


Huh? Type safety buys you not having to worry about dereferencing stale 
nonnull pointers (lifetime of reference exceeding lifetime of referent), 
but nothing about dereferencing null pointers, which are the moral 
equivalent of Nothing.


Failure to handle a null pointer is just like using fromJust and results 
in the same program termination (undefined).
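
A small illustration of that parallel (safeHead, unsafeFirst and describe 
are illustrative names, not from this thread):

import Data.Maybe (fromJust)

safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead _     = Nothing

-- Crashes at runtime on [], much like dereferencing a null pointer:
unsafeFirst :: [a] -> a
unsafeFirst = fromJust . safeHead

-- Pattern matching on Maybe is the type-checked "null check":
describe :: [Int] -> String
describe xs = case safeHead xs of
                Nothing -> "empty"
                Just x  -> "starts with " ++ show x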


Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Last Call for the Italian Haskellers Summer Meeting

2008-08-07 Thread Dan Weston
Shouldn't that be "posta elettronica" (or "posteletta", along the lines 
of the French "courriel")? Somehow I doubt that Dante would have 
approved of the word "email".


Titto Assini wrote:

As usual we will now switch to Dante's bella lingua.


Ottimissimi,

mancano pochi giorni al primo incontro estivo/balneare degli haskeller italiani.

Per informazioni e registrarsi date uno sguardo a:
http://www.haskell.org/haskellwiki/ItaloHaskell/Summer_2008

Oppure contattate il sottoscritto via email o per telefono al 0584 791669.

A presto ragazzi,

   titto




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] BLAS Solve Example

2008-07-23 Thread Dan Weston
A jedi master might stick with the existing double precision solver, 
then convert the results to best rational approximation [1], then do a 
forward solve on the rational versions of matrices, adjusting numerator 
and denominator to eliminate the residual error (with a heuristic to 
favor common factors). If you are very lucky, such a rational number 
will exist, depending on your limits of humongous.


[1] e.g. http://www.dtashley.com/howtos/2007/01/best_rational_approximation/
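
For the conversion step, Data.Ratio already provides approxRational, which 
returns the simplest rational within a given tolerance of a floating-point 
number. A hedged sketch (toRationals is an illustrative name):

import Data.Ratio (approxRational)

-- Simplest rational within eps of each component of the double-precision
-- solution vector; eps bounds how far the rational may stray.
toRationals :: Double -> [Double] -> [Rational]
toRationals eps = map (`approxRational` eps)

-- e.g. toRationals 1e-6 [-5.511e-2, 0.3, 0.2776]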

Darrin Thompson wrote:

On Wed, Jul 23, 2008 at 2:12 AM, Alberto Ruiz [EMAIL PROTECTED] wrote:

$ ghci solve.hs
*Main> sol
3 |> [-5.511e-2,0.3,0.2776]



I was hoping for rational solutions. If I were a true jedi master I'd
write my own solver, which might be the right thing to do. All I know
so far is gauss' method. Probably I'd learn something implementing the
back substitution. hmm

Thanks.

--
Darrin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] beginners mailing list should be beginner's choice

2008-07-21 Thread Dan Weston

Just to avoid any misunderstanding...

I am certain that C.M. Brown meant to say "CC'ed the Haskell-beginners 
mailing list" instead of "moved", but I think it's worth emphasizing 
that the new beginners list was ostensibly created for various discussed 
reasons, but all to provide a more tailored forum for beginners, not to 
restrict participation on haskell-cafe. Words like "move" could sound to 
a beginner like a dismissal or demotion.


(This policy is clearly different from the haskell list, which has a 
much stronger collegially-enforced moderation policy limited to 
announcements.)


I would hate to think that people on the beginners list might worry that 
their questions were not good enough to join the grown-ups on 
haskell-cafe. I think CC'ing to beginners is hint enough, and soon 
enough people will choose the best forum for their comfort level.


Dan

C.M.Brown wrote:

Hi Fernando,

I hope you don't mind, but I've moved this over to the Haskell-beginners
mailing list, where I think this kind of question will be more
appropriate.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Galois Tech Talks: Stream Fusion for Haskell Arrays

2008-07-14 Thread Dan Weston
Slides, plus an audio recording of the talk would be great. With that, 
we could follow along easily.


Johan Tibell wrote:

On Sun, Jul 13, 2008 at 12:16 AM, Don Stewart [EMAIL PROTECTED] wrote:

johan.tibell:

On Sat, Jul 12, 2008 at 12:13 AM, Don Stewart [EMAIL PROTECTED] wrote:
Any possibility of you guys taping the talk?

Unlikely next week, but soon, yes!


How about the slides?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Newbie: Appending arrays?

2008-07-11 Thread Dan Weston

Dmitri O.Kondratiev wrote:
I need extendable array to store and count unique vectors. I have a file 
containing vectors presented as strings like:

10, 6, 80, 25, 6, 7
1, 2, 15, 17, 33, 22
21, 34, 56, 78, 91, 2
...
(BTW, what is the best library function to use to convert string of 
digits into a list or array?)


If you don't need to do error checking on the input syntax, the easiest 
(and arguably fastest) method is just read:


Prelude> let x = "10, 6, 80, 25, 6, 7"
Prelude> read ("[" ++ x ++ "]") :: [Int]
[10,6,80,25,6,7]

For error checking, you can use reads.
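
A hedged sketch of that reads-based check (parseInts is an illustrative 
name, not a library function):

parseInts :: String -> Maybe [Int]
parseInts s = case reads ("[" ++ s ++ "]") of
                [(xs, rest)] | all (== ' ') rest -> Just xs
                _                                -> Nothing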

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What's wrong with the classes/insances?

2008-06-20 Thread Dan Weston

I think the problem is here:

 getCatalog :: Catalog catalog => a -> catalog

This wants to constrain the result of getCatalog to be an instance of 
Catalog, but this only works for function arguments, not results. The 
following code does typecheck, though I have no idea what is does or if 
it is what you want:



type Id = String

class Catalog a where
    listItems :: a -> IO [String]
    getItem :: a -> Id -> IO (Maybe String)

class Catalog q => Item q a where
    getCatalog :: a -> q

data Content d = MkContent {auteur  :: String,
                            inhoud  :: String,
                            catalog :: d}

instance Catalog c => Item c (Content c) where
   getCatalog (MkContent  _ _ e) = e



Pieter Laeremans wrote:

HI,

What 's wrong with this:

type Id = String

class Catalog a where
    listItems :: a -> IO [String]
    getItem :: a -> Id -> IO (Maybe String)

class Item a where
    getCatalog :: Catalog catalog => a -> catalog

data Catalog c => Content c = Content {auteur :: String, inhoud ::
    String, catalog :: c}

instance Catalog c => Item (Content c) where
   getCatalog (Content  _ _ c) = c

I get this as error from ghci:

Couldn't match expected type `catalog' against inferred type `c'
  `catalog' is a rigid type variable bound by
the type signature for `getCatalog'
  at
../Sites/liberaleswebsite/www.liberales.be/cgi-bin/Test.hs:16:26
  `c' is a rigid type variable bound by
  the instance declaration
at ../Sites/liberaleswebsite/www.liberales.be/cgi-bin/Test.hs:20:17
In the expression: c
In the definition of `getCatalog': getCatalog (Content _ _ c) = c
In the definition for method `getCatalog'
Failed, modules loaded: none.

thanks in advance,

P




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type constructor confusion

2008-06-19 Thread Dan Weston
If it helps, feel free to use a different name for the data constructors 
and their data type until the difference is painfully clear to you 
(maybe suffix the constructor with a C or prefix by Mk).


Data types and constructors live in different namespaces and can happily 
use the same identifier. That doesn't mean you have to, if it means more 
work deciphering error messages.


Computers would be just as happy with identifiers like g2ch_Sw'K54h 
(just take a look at the GHC simple core dump!), so pick ones that you 
yourself find perspicuous.


Dan

Ryan Ingram wrote:

It sounds like you need to split up your types a bit more.

data HttpRequest = HttpRequest ...

data HttpResponse = HttpResponse ...

data HttpMessage = MsgRequest HttpRequest | MsgResponse HttpResponse
-- alternatively
-- type HttpMessage = Either HttpRequest HttpResponse

Now you can have functions that take/return just an HttpRequest or
just an HttpResponse, as well as functions that use either one via
HttpMessage.  In the latter case, you do need to pattern match to
decide which one you have.
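
A tiny sketch of that dispatch (field lists elided; handleMessage is an 
illustrative name):

data HttpRequest  = HttpRequest
data HttpResponse = HttpResponse
data HttpMessage  = MsgRequest HttpRequest | MsgResponse HttpResponse

handleMessage :: HttpMessage -> String
handleMessage (MsgRequest  _) = "got a request"
handleMessage (MsgResponse _) = "got a response"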

  -- ryan


On 6/18/08, Stephen Howard [EMAIL PROTECTED] wrote:

Thanks Brandon, forgot to send my reply to the list:

Ok, so I am confusing things.  Good to know.  So my question is how do I
fulfill this scenario?

- I have an action that might return either an HttpResponse or an
HttpRequest, depending on if the IO in the action determined more work
needed doing.  It's here, though I doubt it's correct yet:

requestHandler :: HttpRequest -> IO HttpResponse
requestHandler request = do
  session <- sessionHandler request
  ret <- uriHandler request
  case ret of
      HttpResponse -> ret
      HttpRequest  -> resourceHandler session ret

uriHandler :: HttpRequest -> IO HttpMessage
sessionHandler :: HttpRequest -> IO HttpSession

I've given the uriHandler a signature of IO HttpMessage because the
HttpMessage might be either an HttpResponse or an HttpRequest, and I don't
know how I should be specifying that.  Ideas?

- Stephen

Brandon S. Allbery KF8NH wrote:

On Jun 18, 2008, at 15:31 , Stephen Howard wrote:



HttpMessage.hs:36:20: Not in scope: type constructor or class

`HttpRequest'

The troublesome line is the definition of the cookie function at the end

of the code.  I've made

Right.  HttpRequest is a data constructor associated with the type

constructor HttpMessage.

(Data constructors are effectively functions; you used it in the context

of a type, not a function name.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A simple beginner question

2008-06-03 Thread Dan Weston

There's always one more way to do things in Haskell! :)

Here's yet another way to get at the payloads in a list. You don't have 
to know how this works to use it:


data SampleType = A | B Int | C String

unA :: SampleType -> [()]
unA A = return ()
unA _ = fail "Not an A"

unB :: SampleType -> [Int]
unB (B b) = return b
unB _ = fail "Not a B"

unC :: SampleType -> [String]
unC (C c) = return c
unC _ = fail "Not a C"

-- I can check for more than one constructor...
-- Note that a single type must be returned,
-- so for C I return e.g. the length of the string
unBorC :: SampleType -> [Int]
unBorC (B b) = return b
unBorC (C c) = return (length c)
unBorC _ = fail "Not a B or C"

For lists, the >>= operator knows to ignore failure and collect anything 
else into a new list. The technobabble for this is that [] is a Monad.


*Main> let sampleTypes = [A, B 5, C "test", A, A, B 7, C "go"]
*Main> sampleTypes >>= unA
[(),(),()]
*Main> sampleTypes >>= unB
[5,7]
*Main> sampleTypes >>= unC
["test","go"]
*Main> sampleTypes >>= unBorC
[5,4,7,2]

Adam Smyczek wrote:

Example:

data SampleType = A | B Int | C String | D --  etc.

sampleTypes = [A, B 5, C "test"] :: [SampleType]

How do I find for example element A in the sampleTypes list?
Do I have to create e.g.:

isA :: SampleType -> Bool
isA A = True
isA _ = False

for every constructor and use find?
It feels like this is not the quicker method.

Thanks,
Adam


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] one-way monads

2008-05-21 Thread Dan Weston

Dan Doel wrote:

On Tuesday 20 May 2008, [EMAIL PROTECTED] wrote:

Actually, it's true less than 50% of the time.  In particular, it's
not true of any monad transformer.


Sure it is. Any particular transformer t typically comes with some particular 
way of writing a function of type t m a -> m a (you may have to throw away 
some t-related stuff, of course).


Since a specific transformed monad is built from a specific monad, and a 
specific transformer, and specific transformers are likely to have a function 
of type t m a -> m a, and specific monads are likely to have functions of 
type m a -> a, you can compose them to get a function of type t m a -> a for 
the specific monad t m. And so on for transformed-transformed monads. :)
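
For concreteness, here is a hedged sketch of two such escape hatches using 
mtl's standard run functions (runR and runS are illustrative names, and the 
Int environment and state are arbitrary):

import Control.Monad.Reader (ReaderT, runReaderT)
import Control.Monad.State  (StateT, evalStateT)

-- Each "t m a -> m a" needs some t-specific input thrown in:
-- an environment for ReaderT, an initial state for StateT.
runR :: ReaderT Int IO a -> IO a
runR m = runReaderT m 42

runS :: StateT Int IO a -> IO a
runS m = evalStateT m 0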


That only fails if either of the specific pieces fails to have the right 
function, which happens well under 50% of the time, I think (IO and STM are 
the ones that immediately occur to me (barring a certain evil function), 
although you could make a case for ST by technicality; no failing 
transformers come to mind (except CCT if we're counting ST), but I haven't 
wracked my brain very hard).


-- Dan


The claim was "less than 50% of the time", not "less than 50% of the 
monads in the standard libraries". I wonder what fraction of monad uses in 
real code the IO monad alone accounts for? 50% does not seem implausible 
to me.


Dan Weston

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ubuntu and ghc

2008-05-21 Thread Dan Weston
Now you tell me! I also upgraded late last night and got the exact same 
problem. :(


I just uninstalled the ghc from the Update Manager and was going to 
reinstall tonight. Are you saying that something else is screwed up 
because of this?


Galchin, Vasili wrote:

Hello,

   https://bugs.launchpad.net/ubuntu/+source/gtk2hs/+bug/229489  
this is almost identical to my problem. I am just trying to help others 
on this list who are using Ubuntu Linux to avoid my predicament!


Kind regards, Vasili




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] relational data representation in memory using haskell?

2008-05-21 Thread Dan Weston
Consider SQLite [1], which is a software library that implements a 
self-contained, serverless, zero-configuration, transactional SQL 
database engine.


It is embeddable, can reside completely in memory (including the data), 
and can be saved and restored to disk when needed. It neatly fills the 
niche between maps and a client/server database model.


It has a C API which you can wrap as needed with the FFI, and you 
wouldn't need more than a dozen or so functions to start with (it 
understands SQL too).


[1] http://www.sqlite.org/
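
To give a feel for what that wrapping involves, here is a hedged sketch of 
the first two entry points (sqlite3_open and sqlite3_close are real C API 
functions; openDB is an illustrative name; link with -lsqlite3 and add error 
handling in real code):

{-# LANGUAGE ForeignFunctionInterface, EmptyDataDecls #-}
import Foreign
import Foreign.C.String (CString, withCString)
import Foreign.C.Types  (CInt(..))

-- Opaque stand-in for the C-side sqlite3 struct.
data SQLite3

foreign import ccall "sqlite3_open"
  c_sqlite3_open :: CString -> Ptr (Ptr SQLite3) -> IO CInt

foreign import ccall "sqlite3_close"
  c_sqlite3_close :: Ptr SQLite3 -> IO CInt

openDB :: FilePath -> IO (Ptr SQLite3)
openDB path =
  withCString path $ \cpath ->
  alloca $ \ppDb -> do
    _rc <- c_sqlite3_open cpath ppDb   -- a real wrapper would check _rc
    peek ppDb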

Marc Weber wrote:

On Wed, May 21, 2008 at 05:05:21PM -0700, Jeremy Shaw wrote:

At Thu, 22 May 2008 01:04:24 +0200,
Marc Weber wrote:


Some way representing relational data which is typically stored in
databases such as Postgresql..

Rewriting something like Postgresql in haskell would take ages..
So I'd be satisfied with having in memory representation only (this
would fit the HAppS state system very well .. :)
Are you familiar with the HAppS IxSet library? 

Yes - not with all that sybwith-class stuff though.
There are some issues:
it's dynamic: doesn't this waste some CPU cycles?
no multi indexes..
maybe some space leaks because the data type containing the Maps is
built after each filter, maybe leaving unevaluated chunks - Saizan has
told me about it on HAppS.. And you can't extend it to the degree I'd
like to (eg throw a query at it and let the system figure out which
indexes to use)
And last but not least: It doesn't support relations at all yet.
So all the effort adding / checking foreign keys etc has to be done
anyway.

Thanks Marc
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Write Haskell as fast as C.

2008-05-16 Thread Dan Weston

Ketil Malde wrote:

mkAnn :: ByteString -> Annotation
mkAnn = pick . B.words
    where pick (_db:up:rest) = pick' up $ getGo rest
          pick' up' (go:_:ev:_) = Ann (B.copy up') (read $ B.unpack go)
                                      (read $ B.unpack ev)
          getGo = dropWhile (not . B.isPrefixOf (pack "GO:"))


It seems at first glance miraculously coincidental that the dropWhile in 
the getGo definition knows to stop dropping when there are exactly 4 
elements, in order to match the pattern in the second parameter of the 
pick' definition, whose argument is provided by (getGo Rest).


What magic makes this true? Just curious...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Write Haskell as fast as C.

2008-05-16 Thread Dan Weston

Dan Weston wrote:

Ketil Malde wrote:

mkAnn :: ByteString - Annotation
mkAnn = pick . B.words
where pick (_db:up:rest) = pick' up $ getGo rest
  pick' up' (go:_:ev:_) = Ann (B.copy up') (read $ B.unpack 
go) (read $ B.unpack ev)

  getGo = dropWhile (not . B.isPrefixOf (pack GO:))


It seems at first glance miraculously coincidental that the dropWhile in 
the getGo definition knows to stop dropping when there are exactly 4 
elements, in order to match the pattern in the second parameter of the 
pick' definition, whose argument is provided by (getGo Rest).


What magic makes this true? Just curious...


I didn't mean exactly 4, but at least 3. Otherwise, I'm still 
curious! :)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Endianess

2008-05-14 Thread Dan Weston

Henning Thielemann wrote:


On Tue, 13 May 2008, Achim Schneider wrote:


Jed Brown [EMAIL PROTECTED] wrote:


It's not that simple with bits.  They lack consistency just like the
usual US date format and the way Germans read numbers.


So you claim that you pronounce 14 "tenty-four"? In German, pronunciation
is completely uniform from 13 to 99.


http://www.verein-zwanzigeins.de/


So I've always wondered, if you are writing down a number being dictated 
(slowly) by someone else, like 234, do you write the 2, then leave space 
and write the 4, then go back and fill in with 3? Or do you push the 4 
onto the stack until the 3 arrives, and write 34 at once.


If the latter, does this imply that Germans have a harder time with tail 
recursion?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Richer (than ascii) notation for haskell source?

2008-05-14 Thread Dan Weston

Richard A. O'Keefe wrote:

At least to give editors a fighting chance of matching their concept of a
word with Haskell tokens, it might be better to use nabla instead of
lambda.  Other old APL fans may understand why (:-).  Alternatively, didn't
Church really want to use a character rather like a down tack, and have to
squish it to get a letter his printer was happy with?  Nah, nabla for me.


nablabot anyone? Nabla calculus? (But wait, in differential calculus 
nabla is a gradient operator, so let's rename that lambda).


And people misspelling nabla as nambla will find a surprise when they 
google it...


Better use a Unihan character instead.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Random numbers / monads - beginner question

2008-05-08 Thread Dan Weston

Henning Thielemann wrote:


On Thu, 8 May 2008, Madoc wrote:


minValue = 0::Int
maxValue = 1000::Int

normalize a | a < minValue = minValue
            | a > maxValue = maxValue
            | otherwise = a



normalize' = min maxValue . max minValue


There is a curiosity here. The functions normalize and normalize' are 
extensionally equal only because minValue <= maxValue, but intensionally 
different. The intensional equivalent is to reverse the order of composition:


normalize'' = max minValue . min maxValue

which remains equal to normalize whatever the values of minValue and 
maxValue.


That the order of composition (or of guarded expressions) matters 
conditionally based on its parameters is reason enough for the original 
poster to decide what the right answer should be if maxValue < 
minValue. These corner cases are often where future bugs lie dormant. My 
choice would be:


normalize''' = max trueMin . min trueMax
  where trueMin = min minValue maxValue
        trueMax = max minValue maxValue

Now the function makes no assumptions about external values. This is no 
less efficient than before, since trueMin and trueMax are CAFs evaluated 
only once.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Induction (help!)

2008-05-07 Thread Dan Weston

Paul,

Sometimes it helps to go exhaustively through every detail to be sure 
there is no magic going on. Proceed only if you want exhaustive detail...


If it seems that people are skipping some steps in their argument, it is 
because they are! They already understand it so well that they forgot to 
add them. Here is what I believe is an excruciatingly complete proof, to 
assure you that no handwaving is going on.


The values of Nat are defined inductively on their structure, using its 
initial algebra. Take the data type definition


  data Nat = Zero | Succ Nat

There is actually an implied undefined as well, so this is really

  Nat = undefined | Zero | Succ Nat

Just as 3 = const 3 x for any x, we can convert from pointed to 
pointfree notation (i.e. go from a recursive definition to a 
least-defined fixed point definition):


  f = const undefined | const Zero | Succ
  Nat = LFP (m -> infinity) ( f^m undefined )

[ For the notationally picky, I am using | instead of + in the functor 
for clarity, which causes no ambiguity.]


Because we are overachievers, we will prove our theorem not just for the 
least-defined fixed point of f, but change the limit to a union and 
prove it for (f^m undefined) for every m, which includes the fixed point.


Why the extra work? Because now we have an inductive argument. The base 
case is undefined, and each step is another application of f.


DEFINITION:

add Zero n = n -- Line 1
add (Succ m) n = Succ (add m n)-- Line 2

THEOREM:

forall x :: Nat, add x Zero = x

PROOF: By induction

 BASE CASE:  x = f^0 undefined = undefined

  add undefined Zero = undefined
{ Line 1, strict pattern match failure in first arg }

 STEP CASE:  Assume add x Zero = x, Prove add y Zero = y where y in f x

   What y are in f x?
   f x = (const undefined | const Zero | Succ) x
   = const undefined x | const Zero x | Succ x
   = undefined | Zero | Succ x

   We have to consider each of these possibilities for y.

   1) add undefined Zero = undefined   { Base case }

   2) add Zero  Zero = Zero{ Def. line 1 }

   3) add (Succ x) Zero
= Succ (add x Zero){ Def. line 2 }
= Succ x   { Step case assumption }

   This exhausts the three cases for y.

Therefore, by induction add x Zero = x for all x :: Nat

Note how structural induction maps to induction over integers. You do 
not have to enumerate some flattened form of the domain and do induction 
over their enumerated order. Indeed, some domains (like binary trees) 
are not even countably infinite, so the induction hypothesis would not 
be valid when used in this way.


Instead you rely on the fact that all algebraic data types already have 
an (at most countably infinite) complete partial order based on 
constructor application (or eqivalently, initial algebra composition) 
[and always remembering that there is an implied union in | with 
undefined], grounded at undefined, and that consequently you can do 
induction over constructor application.


I hope that there are no errors in the above and that I have not left 
out even the smallest detail. You should be able to see from the above 
that structural induction works on any algebraic data type.


Obviously, after the first time only a masochist would be so verbose. 
But the induction hypothesis does after all require a first time! :)


Dan Weston

PR Stanley wrote:

Hi
One of you chaps mentioned the Nat data type
data Nat = Zero | Succ Nat

Let's have
add :: Nat - Nat - Nat
add Zero n = n
add (Succ m)n = Succ (add m n)

Prove
add m Zero = m

I'm on the verge of giving up on this. :-(
Cheers
Paul


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Induction (help!)

2008-05-06 Thread Dan Weston

Ryan Ingram wrote:


One point to remember is that structural induction fails to hold on
infinite data structures:


As I understand it, structural induction works even for infinite data 
structures if you remember that the base case is always _|_. [1]


Let the initial algebra functor F = const Zero | Succ

We have to prove that:
P(_|_)  holds
if P(X) holds then P(F(X)) holds

Here, we see that for P(x) = (x /= Succ x), since

infinity = Succ infinity = _|_

then P(_|_) does not hold (since _|_ = Succ _|_ = _|_)

As a counterexample, we can prove e.g. that

length (L ++ M) = length L + length M

See [2] for details, except that they neglect the base case P(_|_) and 
start instead with the F^1 case of P([]), so their proof is valid only 
for finite lists. We can include the base case too:


length (_|_ ++ M) = length _|_ + length M
length (_|_)      = _|_ + length M
_|_               = _|_

and similarly for M = _|_ and both _|_.

So this law holds even for infinite lists.

[1] Richard Bird, Introduction to Functional Programming using 
Haskell, p. 67


[2] http://en.wikipedia.org/wiki/Structural_induction

Also note that


data Nat = Zero | Succ Nat deriving (Eq, Show)

Take P(x) to be (x /= Succ x).

P(0):
Zero /= Succ Zero. (trivial)

P(x) = P(Succ x)

So, given P(x) which is: (x /= Succ x),
we have to prove P(Succ x): (Succ x /= Succ (Succ x))

In the derived Eq typeclass:
   Succ a /= Succ b = a /= b
Taking x for a and Succ x for b:
   Succ x /= Succ (Succ x) = x /= Succ x
which is P(x).

Now, consider the following definition:
infinity :: Nat
infinity = Succ infinity

infinity /= Succ infinity == _|_; and if you go by definitional
equality, infinity = Succ infinity, so even though P(x) holds on all
natural numbers due to structural induction, it doesn't hold on this
infinite number.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] unapplying function definitions?

2008-05-05 Thread Dan Weston

Hi Paul!

I'm not sure about the context of Hutton, but maybe unapplying 
functions refers to the principle of extensionality.


Leibnitz' rule of the indiscernibility of identicals [1] says that if 
two functions are equal, then the respective results of applying each to 
*any* value of their common domain are equal:


f, g :: a -> b and f = g  ==>  forall (x :: a) . f x = g x

Since Haskell contains the undefined value in every type, this applies 
as well to undefined: f x and g x must either both be undefined or equal.


We can keep going from left to right, so that for any validly typed y, f 
x y = g x y, f x y z = g x y z, etc.


The converse of this is also true, and called the principle of 
extensionality:


Two functions can be considered equal if they have a common domain 
(type) and if, for each value in that domain (type), which in Haskell 
includes undefined, they give the same result.


So if we have two functions f and g, and we know [2] that for *every* z 
(i.e. z is a free variable or a universally quantified variable) that


f x y z = g x y z   ==>   f x y = g x y

We can keep going from right to left: if in addition, this is true for 
all y, then f x = g x. And finally, if this is true for all x, then f = g.


Note that Leibnitz allows for any argument, extensionality requires 
equality for every argument.


Dan Weston

[1] 
http://en.wikipedia.org/wiki/Identity_of_indiscernibles#Identity_and_indiscernibility


[2] We know this because e.g. there is some definition or theorem saying 
so. We cannot compute this (even for finite domains) by trying each 
value. They need to give the same result even for undefined arguments, 
so that you cannot give a computable extensional definition of function 
equality even for finite domains, since if one function doesn't halt 
when applied to 3, the other must also not halt, and you can't wait for 
ever to be sure this is true.


PR Stanley wrote:

Hi
What on earth is unapplying function definitions?
The following is taken from chapter 13 of the Hutton book:
...when reasoning about programs, function definitions can be both 
applied from left to right and unapplied from right to left.


Cheers
Paul



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] My try for a CoroutineT monad tranformer

2008-04-25 Thread Dan Weston
I guess like minds think alike! See the very recent e-mail thread 
started by Ryan Ingram:

http://thread.gmane.org/gmane.comp.lang.haskell.cafe/39155/focus=39159

Take a look at the code referenced in Luke Palmer's reply:
http://luqui.org/git/?p=luqui-misc.git;a=blob;f=work/code/haskell/frp/Fregl/Suspend.hs

A snippet follows:

 class (Monad m) => MonadSuspend v m | m -> v where
   attempt :: m a -> m (Either a (v -> m a))
   suspend :: m v

 newtype SuspendT v m a
   = SuspendT { runSuspendT :: m (Either a (v -> SuspendT v m a)) }

Your (Coroutine m a) appears to be isomorphic to (SuspendT () m a) [so 
Coroutine m () = SuspendT () m ()]


Your runCoroutineT appears to be isomorphic to a specialization of 
runSuspendT:


runSuspendT' :: SuspendT () m () -> m (Either () (() -> SuspendT () m ()))

Here () -> a ~ a and Either () a ~ Maybe a

Dan

Joachim Breitner wrote:

Hi,

(for those who follow planet.haskell.org this is old news, but I thought
I’d tell the others)

In
http://www.joachim-breitner.de/blog/archives/291-Pausable-IO-actions-for-better-GUI-responsiveness.html
I describe how I wrote a monad transformer that allows me to pause a
computation from within by returning another computation that I can use
to re-start the computation (or to throw it away if I want). I needed
this for a long running drawing computation in a gtk2hs program that I
want to pause at convenient points (to allow user interaction), and that
I need to abort when what I’m drawing is not up-to-date anymore.

The API basically consists of the function

runCoroutineT :: Monad m => CoroutineT m () -> m (Maybe (CoroutineT m ()))

which runs the pausable computation, any Maybe returns Just the resume
action, and the function

pause :: Monad m => CoroutineT m ()

to be used inside the computation, which pauses it.

I have put the complete module in the darcs repository that might later
also contain the GUI program at http://darcs.nomeata.de/FrakView/

What do you think of CoroutineT? Could it have been easily implemented
using the existing monad transformers? Is it useful enough so that it
should be included somewhere, and where? Are there any problems with
strictness or other tripping points that I have overlooked?

Greetings,
Joachim




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] My try for a CoroutineT monad tranformer

2008-04-25 Thread Dan Weston
Is there a Haskell Wiki page (or blog) on Monad Suspension? This looks 
like a nice paradigm that apfelmus points out can be used to 
considerably shorten your code, but only if the rest of us learn how!


If not, maybe someone can be persuaded to write one?
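
In the meantime, here is a minimal self-contained sketch of the idea (not 
Joachim's module or the Fregl code linked above; runCoroutineT' and liftCo 
are illustrative names):

import Control.Monad (ap, liftM)

-- Either finish with an 'a', or pause and hand back the rest of the
-- computation to run later (or to throw away).
newtype CoroutineT m a =
  CoroutineT { runCoroutineT' :: m (Either a (CoroutineT m a)) }

instance Monad m => Functor (CoroutineT m) where
  fmap = liftM

instance Monad m => Applicative (CoroutineT m) where
  pure  = CoroutineT . return . Left
  (<*>) = ap

instance Monad m => Monad (CoroutineT m) where
  m >>= f = CoroutineT $ do
    r <- runCoroutineT' m
    case r of
      Left a   -> runCoroutineT' (f a)
      Right m' -> return (Right (m' >>= f))

-- Finish the current segment, returning the rest of the computation.
pause :: Monad m => CoroutineT m ()
pause = CoroutineT (return (Right (return ())))

liftCo :: Monad m => m a -> CoroutineT m a
liftCo act = CoroutineT (act >>= return . Left)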

Dan

Joachim Breitner wrote:

Hi,

Am Freitag, den 25.04.2008, 11:49 -0700 schrieb Dan Weston:
I guess like minds think alike! See the very recent e-mail thread 
started by Ryan Ingram:

http://thread.gmane.org/gmane.comp.lang.haskell.cafe/39155/focus=39159

Take a look at the code referenced in Luke Palmer's reply:
http://luqui.org/git/?p=luqui-misc.git;a=blob;f=work/code/haskell/frp/Fregl/Suspend.hs

A snippet follows:

  class (Monad m) => MonadSuspend v m | m -> v where
    attempt :: m a -> m (Either a (v -> m a))
    suspend :: m v
 
  newtype SuspendT v m a
    = SuspendT { runSuspendT :: m (Either a (v -> SuspendT v m a)) }

Your (Coroutine m a) appears to be isomorphic to (SuspendT () m a) [so 
Coroutine m () = SuspendT () m ()]


Your runCoroutineT appears to be isomorphic to a specialization of 
runSuspendT:


runSuspendT' :: SuspendT () m () -> m (Either () (() -> SuspendT () m ()))

Here () -> a ~ a and Either () a ~ Maybe a


You are quite right, it really is the same thing. The implementation
behind runCoroutineT is not just a specialization, but the exact same
thing (with Left and Right switched). I just put the specialization
there because I had no need for a return value in my use case.

And interesting how Ryan and me had the same thoughts on the same day.
Maybe the April 24th should be considered Suspend You Monadic Action
Day.

Greetings,
Joachim





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

