Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-18 Thread Ryan Ingram
Oleg, do you have any references for the extension of lambda-encoding of
data into dependently typed systems?

In particular, consider Nat:

nat_elim :: forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ
n)) -> (n:Nat) -> P n

The naive lambda-encoding of 'nat' in the untyped lambda-calculus has
exactly the correct form for passing to nat_elim:

nat_elim pZero pSucc n = n pZero pSucc

with

zero :: Nat
zero pZero pSucc = pZero

succ :: Nat -> Nat
succ n pZero pSucc = pSucc (n pZero pSucc)

But trying to encode the numerals this way leads to Nat referring to its
value in its type!

   type Nat = forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ
n)) -> P ???

Is there a way out of this quagmire?  Or are we stuck defining actual
datatypes if we want dependent types?

  -- ryan


On Tue, Sep 18, 2012 at 1:27 AM, o...@okmij.org wrote:


 There has been a recent discussion of ``Church encoding'' of lists and
 the comparison with Scott encoding.

 I'd like to point out that what is often called Church encoding is
 actually Boehm-Berarducci encoding. That is, often seen

  newtype ChurchList a =
  CL { cataCL :: forall r. (a -> r -> r) -> r -> r }

 (in http://community.haskell.org/%7Ewren/list-extras/Data/List/Church.hs )

 is _not_ Church encoding. First of all, Church encoding is not typed
 and it is not tight. The following article explains the other
 difference between the encodings

 http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html

 Boehm-Berarducci encoding is very insightful and influential. The
 authors truly deserve credit.

 P.S. It is actually possible to write zip function using Boehm-Berarducci
 encoding:
 http://okmij.org/ftp/ftp/Algorithms.html#zip-folds




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-18 Thread Ryan Ingram
Fascinating!

But it looks like you still 'cheat' in your induction principles...

×-induction : ∀{A B} {P : A × B → Set}
→ ((x : A) → (y : B) → P (x , y))
→ (p : A × B) → P p
×-induction {A} {B} {P} f p rewrite sym (×-η p) = f (fst p) (snd p)

Can you somehow define

x-induction {A} {B} {P} f p = p (P p) f


On Tue, Sep 18, 2012 at 4:09 PM, Dan Doel dan.d...@gmail.com wrote:

 This paper:

 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.957

 Induction is Not Derivable in Second Order Dependent Type Theory,
 shows, well, that you can't encode naturals with a strong induction
 principle in said theory. At all, no matter what tricks you try.

 However, A Logic for Parametric Polymorphism,

 http://www.era.lib.ed.ac.uk/bitstream/1842/205/1/Par_Poly.pdf

 Indicates that in a type theory incorporating relational parametricity
 of its own types,  the induction principle for the ordinary
 Church-like encoding of natural numbers can be derived. I've done some
 work here:

 http://code.haskell.org/~dolio/agda-share/html/ParamInduction.html

 for some simpler types (although, I've been informed that sigma was
 novel, it not being a Simple Type), but haven't figured out natural
 numbers yet (I haven't actually studied the second paper above, which
 I was pointed to recently).

 -- Dan

 On Tue, Sep 18, 2012 at 5:41 PM, Ryan Ingram ryani.s...@gmail.com wrote:
  Oleg, do you have any references for the extension of lambda-encoding of
  data into dependently typed systems?
 
  In particular, consider Nat:
 
  nat_elim :: forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ
  n)) -> (n:Nat) -> P n
 
  The naive lambda-encoding of 'nat' in the untyped lambda-calculus has
  exactly the correct form for passing to nat_elim:
 
  nat_elim pZero pSucc n = n pZero pSucc
 
  with
 
  zero :: Nat
  zero pZero pSucc = pZero
 
  succ :: Nat -> Nat
  succ n pZero pSucc = pSucc (n pZero pSucc)
 
  But trying to encode the numerals this way leads to Nat referring to
 its
  value in its type!
 
 type Nat = forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ
 n))
  -> P ???
 
  Is there a way out of this quagmire?  Or are we stuck defining actual
  datatypes if we want dependent types?
 
-- ryan
 
 
 
  On Tue, Sep 18, 2012 at 1:27 AM, o...@okmij.org wrote:
 
 
  There has been a recent discussion of ``Church encoding'' of lists and
  the comparison with Scott encoding.
 
  I'd like to point out that what is often called Church encoding is
  actually Boehm-Berarducci encoding. That is, often seen
 
   newtype ChurchList a =
   CL { cataCL :: forall r. (a -> r -> r) -> r -> r }
 
  (in
 http://community.haskell.org/%7Ewren/list-extras/Data/List/Church.hs )
 
  is _not_ Church encoding. First of all, Church encoding is not typed
  and it is not tight. The following article explains the other
  difference between the encodings
 
  http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html
 
  Boehm-Berarducci encoding is very insightful and influential. The
  authors truly deserve credit.
 
  P.S. It is actually possible to write zip function using
 Boehm-Berarducci
  encoding:
  http://okmij.org/ftp/ftp/Algorithms.html#zip-folds
 
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-18 Thread Ryan Ingram
On Tue, Sep 18, 2012 at 8:39 PM, Dan Doel dan.d...@gmail.com wrote:

 On Tue, Sep 18, 2012 at 11:19 PM, Ryan Ingram ryani.s...@gmail.com
 wrote:
  Fascinating!
 
  But it looks like you still 'cheat' in your induction principles...
 
  ×-induction : ∀{A B} {P : A × B → Set}
  → ((x : A) → (y : B) → P (x , y))
  → (p : A × B) → P p
  ×-induction {A} {B} {P} f p rewrite sym (×-η p) = f (fst p) (snd p)
 
  Can you somehow define
 
  x-induction {A} {B} {P} f p = p (P p) f

 No, or at least I don't know how.

 The point is that with parametricity, I can prove that if p : A × B,
 then p = (fst p , snd p). Then when I need to prove P p, I change the
 obligation to P (fst p , snd p). But i have (forall x y. P (x , y))
 given.

 I don't know why you think that's cheating. If you thought it was
 going to be a straight-forward application of the Church (or some
 other) encoding, that was the point of the first paper (that's
 impossible). But parametricity can be used to prove statements _about_
 the encodings that imply the induction principle.


The line of logic I was thinking is that you could lift the parametricity
proof to the (proof-irrelevant) typelevel; that is, you rewrite the type of
f from

f :: (x : A) -> (y : B) -> P (x,y)

to

f :: A -> B -> P p

given that p = (x,y), to make f 'compatible' with p.  But I see the
trickiness, because if you allow yourself to do that, you no longer are
constrained to pass (fst p) and (snd p) to f, you could pass any objects of
type A and B.

Wait, can't you just use x-lemma1 to rewrite the rhs of x-induction?

×-lemma₁ : ∀{A B R} → (p : A × B) (k : A → B → R)
 → k (fst p) (snd p) ≡ p R k


x-lemma1 {A} {B} {P p} p f :: f (fst p) (snd p) = p (P p) f

Or is the problem that the k parameter to x-lemma1 isn't 'dependently
typed' enough?

  -- ryan




  On Tue, Sep 18, 2012 at 4:09 PM, Dan Doel dan.d...@gmail.com wrote:
 
  This paper:
 
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.957
 
  Induction is Not Derivable in Second Order Dependent Type Theory,
  shows, well, that you can't encode naturals with a strong induction
  principle in said theory. At all, no matter what tricks you try.
 
  However, A Logic for Parametric Polymorphism,
 
  http://www.era.lib.ed.ac.uk/bitstream/1842/205/1/Par_Poly.pdf
 
  Indicates that in a type theory incorporating relational parametricity
  of its own types,  the induction principle for the ordinary
  Church-like encoding of natural numbers can be derived. I've done some
  work here:
 
  http://code.haskell.org/~dolio/agda-share/html/ParamInduction.html
 
  for some simpler types (although, I've been informed that sigma was
  novel, it not being a Simple Type), but haven't figured out natural
  numbers yet (I haven't actually studied the second paper above, which
  I was pointed to recently).
 
  -- Dan
 
  On Tue, Sep 18, 2012 at 5:41 PM, Ryan Ingram ryani.s...@gmail.com
 wrote:
   Oleg, do you have any references for the extension of lambda-encoding
 of
   data into dependently typed systems?
  
   In particular, consider Nat:
  
   nat_elim :: forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P
   (succ
   n)) -> (n:Nat) -> P n
  
   The naive lambda-encoding of 'nat' in the untyped lambda-calculus has
   exactly the correct form for passing to nat_elim:
  
   nat_elim pZero pSucc n = n pZero pSucc
  
   with
  
   zero :: Nat
   zero pZero pSucc = pZero
  
   succ :: Nat -> Nat
   succ n pZero pSucc = pSucc (n pZero pSucc)
  
   But trying to encode the numerals this way leads to Nat referring to
   its
   value in its type!
  
  type Nat = forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P
 (succ
   n))
   -> P ???
  
   Is there a way out of this quagmire?  Or are we stuck defining actual
   datatypes if we want dependent types?
  
 -- ryan
  
  
  
   On Tue, Sep 18, 2012 at 1:27 AM, o...@okmij.org wrote:
  
  
   There has been a recent discussion of ``Church encoding'' of lists
 and
   the comparison with Scott encoding.
  
   I'd like to point out that what is often called Church encoding is
   actually Boehm-Berarducci encoding. That is, often seen
  
 newtype ChurchList a =
 CL { cataCL :: forall r. (a -> r -> r) -> r -> r }
  
   (in
   http://community.haskell.org/%7Ewren/list-extras/Data/List/Church.hs)
  
   is _not_ Church encoding. First of all, Church encoding is not typed
   and it is not tight. The following article explains the other
   difference between the encodings
  
  
 http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html
  
   Boehm-Berarducci encoding is very insightful and influential. The
   authors truly deserve credit.
  
   P.S. It is actually possible to write zip function using
   Boehm-Berarducci
   encoding:
   http://okmij.org/ftp/ftp/Algorithms.html#zip-folds
  
  
  
  
   ___
   Haskell-Cafe mailing list
   Haskell-Cafe@haskell.org
   http

Re: [Haskell-cafe] guards in applicative style

2012-09-17 Thread Ryan Ingram
Not exactly what you asked for, but...

filter (uncurry somePredicate) $ (,) <$> list1 <*> list2

does the job.

Using only applicative operations, it's impossible to affect the 'shape' of
the result--this is the difference in power between applicative and monad.
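
For instance, a quick check that the two formulations agree (example data
made up for illustration):

import Control.Applicative ((<$>), (<*>))

pairsLC, pairsAp :: [(Int, Int)]
pairsLC = [ (x, y) | x <- [1..3], y <- [1..3], x < y ]
pairsAp = filter (uncurry (<)) $ (,) <$> [1..3] <*> [1..3]
-- both are [(1,2),(1,3),(2,3)]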

  -- ryan


On Wed, Sep 12, 2012 at 7:40 AM, felipe zapata tifonza...@gmail.com wrote:

 Hi Haskellers,

 Suppose I have two list and I want to calculate
 the cartesian product between the two of them,
 constrained to a predicate.
 In List comprehension notation is just

 result = [ (x, y) | x <- list1, y <- list2, somePredicate x y ]

 or in monadic notation

 result = do
   x <- list1
   y <- list2
   guard (somePredicate x y)
   return $ (x,y)

 Then I was wondering if we can do something similar using an applicative
 style

 result = (,) <$> list1 <*> list2 (somePredicate ???)

 The question is then,
 there is a way for defining a guard in applicative Style?

 Thanks in advance,

 Felipe Zapata.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type error with Type families

2012-09-17 Thread Ryan Ingram
The problem is that the function 'element' is ambiguous, for the reasons
MigMit pointed out.

The standard solution to this problem is to add a dummy argument to fix the
type argument to the type function:

data Proxy a = Proxy

class ... => ReplaceOneOf full where
    type Item full :: *

    -- implementations can just ignore the first argument
    element :: Proxy full -> Item full -> [Item full] -> Bool

replaceOneOf :: ...
...
| element (Proxy :: Proxy full) x from = ...

Now the choice of which 'element' to use can be determined by the type of
the proxy.

  -- ryan

On Sun, Sep 16, 2012 at 4:05 AM, Marco Túlio Pimenta Gontijo 
marcotmar...@gmail.com wrote:

 Hi.

 I cannot make this program type check:

 {-# LANGUAGE TypeFamilies, FlexibleContexts #-}
 import qualified Data.ListLike as LL

 class LL.ListLike full (Item full) => ReplaceOneOf full where
   type Item full :: *
   replaceOneOf :: [Item full] -> full -> full -> full
   replaceOneOf from to list
     | LL.null list = list
     | x `element` from
       = LL.concat [to, replaceOneOf from to $ LL.dropWhile (`element` from) xs]
     | otherwise = LL.cons x $ replaceOneOf from to xs
     where
       x = LL.head list
       xs = LL.tail list
   element :: Item full -> [Item full] -> Bool

 The error message is:

 Line 9: 1 error(s), 0 warning(s)

 Could not deduce (Item full0 ~ Item full)
 from the context (ReplaceOneOf full)
   bound by the class declaration for `ReplaceOneOf'
   at /home/marcot/tmp/test_flymake.hs:(4,1)-(15,45)
 NB: `Item' is a type function, and may not be injective
 Expected type: [Item full0]
   Actual type: [Item full]
 In the second argument of `element', namely `from'
 In the expression: x `element` from

 I have tried using asTypeOf, but it did not work:

 {-# LANGUAGE TypeFamilies, FlexibleContexts, ScopedTypeVariables #-}
 import qualified Data.ListLike as LL

 class LL.ListLike full (Item full) => ReplaceOneOf full where
   type Item full :: *
   replaceOneOf :: Item full -> [Item full] -> full -> full -> full
   replaceOneOf xt from to list
     | LL.null list = list
     | (x `asTypeOf` xt) `element` from
       = LL.concat [to, replaceOneOf xt from to $ LL.dropWhile (`element` from) xs]
     | otherwise = LL.cons x $ replaceOneOf xt from to xs
     where
       x = LL.head list
       xs = LL.tail list
   element :: Item full -> [Item full] -> Bool

 Can someone point me to a solution?

 Greetings.

 --
 marcot
 http://marcot.eti.br/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type variable in class instance

2012-09-10 Thread Ryan Ingram
From the point of view of the language, Message () and Message Int and
Message Player are all completely distinct types and may have different
behavior--there's no way for it to know that they all have the same
representation that only contains a String.

The derived Typeable instance for Message m is really a derived instance
of Typeable1 Message along with the generic instance (Typeable1 f,
Typeable a) => Typeable (f a) in Data.Typeable.

So you need to specify the type of message you want, or drop the type
parameter from Message.

A simpler answer, though, would just be to put the functions in the
typeclass.

class Event e where
    viewEvent :: e -> IO ()

instance Event Player where
    viewEvent (Player a) = putStrLn $ show a
instance Event (Message m) where
    viewEvent (Message s) = putStrLn s

In this case, the instance makes it clear that the type parameter is
irrelevant and puts no constraints on it.  And the type of viewEvent is
exactly the same as you were asking for: Event e => e -> IO ().

  -- ryan

On Mon, Sep 10, 2012 at 3:06 PM, Corentin Dupont
corentin.dup...@gmail.comwrote:

 Hi Stephen,
 I wasn't aware of Data.Dynamic.
 I tried:

 viewEvent :: Dynamic -> IO ()
 viewEvent event = do
     case fromDynamic event of
         Nothing -> return ()
         Just (Message s) -> putStrLn $ show s

 But still got the same error (Ambiguous type variable `t0' in the
 constraint:  (Typeable t0) arising from a use of `fromDynamic')...

 Best,
 Corentin



 On Mon, Sep 10, 2012 at 11:33 PM, Stephen Tetley stephen.tet...@gmail.com
  wrote:

 Whilst dynamic typing isn't idiomatic for Haskell, it seems like
 you've decided you want it. So why not use Data.Dynamic rather than
 roll your own dynamic typing with Typeable?



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type variable in class instance

2012-09-10 Thread Ryan Ingram
By the way, if you *really* want to do it your way, you can inspect the
typeOf the event directly and look for your Message type (using
typeRepTyCon from Data.Typeable), then unsafeCoerce into Message () to
extract the String.

import Unsafe.Coerce
import Data.Typeable

tyConMessage :: TyCon
tyConMessage = typeRepTyCon $ typeOf ( undefined :: Message () )

getMessageContents :: Event e => e -> Maybe String
getMessageContents e
   | typeRepTyCon (typeOf e) == tyConMessage
      = Just $ case (unsafeCoerce e :: Message ()) of Message s -> s
   | otherwise = Nothing

But I strongly recommend *not* doing it this way :)

  -- ryan

On Mon, Sep 10, 2012 at 4:03 PM, Ryan Ingram ryani.s...@gmail.com wrote:

 From the point of view of the language, Message () and Message Int and
 Message Player are all completely distinct types and may have different
 behavior--there's no way for it to know that they all have the same
 representation that only contains a String.

 The derived Typeable instance for Message m is really a derived instance
 of Typeable1 Message along with the generic instance (Typeable1 f,
 Typeable a) => Typeable (f a) in Data.Typeable.

 So you need to specify the type of message you want, or drop the type
 parameter from Message.

 A simpler answer, though, would just be to put the functions in the
 typeclass.

 class Event e where
     viewEvent :: e -> IO ()

 instance Event Player where
     viewEvent (Player a) = putStrLn $ show a
 instance Event (Message m) where
     viewEvent (Message s) = putStrLn s

 In this case, the instance makes it clear that the type parameter is
 irrelevant and puts no constraints on it.  And the type of viewEvent is
 exactly the same as you were asking for: Event e => e -> IO ().

   -- ryan

 On Mon, Sep 10, 2012 at 3:06 PM, Corentin Dupont 
 corentin.dup...@gmail.com wrote:

 Hi Stephen,
 I wasn't aware of Data.Dynamic.
 I tried:

 viewEvent :: Dynamic -> IO ()
 viewEvent event = do
     case fromDynamic event of
         Nothing -> return ()
         Just (Message s) -> putStrLn $ show s

 But still got the same error (Ambiguous type variable `t0' in the
 constraint:  (Typeable t0) arising from a use of `fromDynamic')...

 Best,
 Corentin



 On Mon, Sep 10, 2012 at 11:33 PM, Stephen Tetley 
 stephen.tet...@gmail.com wrote:

 Whilst dynamic typing isn't idiomatic for Haskell, it seems like
 you've decided you want it. So why not use Data.Dynamic rather than
 roll your own dynamic typing with Typeable?



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Rigid skolem type variable escaping scope

2012-08-24 Thread Ryan Ingram
System Fc has another name: GHC Core.   You can read it by running 'ghc
-ddump-ds' (or, if you want to see the much later results after
optimization, -ddump-simpl):

For example:

NonGADT.hs:

{-# LANGUAGE TypeFamilies, ExistentialQuantification, GADTs #-}
module NonGADT where

data T a = (a ~ ()) => T
f :: T a -> a
f T = ()

x :: T ()
x = T

C:\haskell> ghc -ddump-ds NonGADT.hs
[1 of 1] Compiling NonGADT  ( NonGADT.hs, NonGADT.o )

 Desugar (after optimization) 
Result size = 17

NonGADT.f :: forall a_a9N. NonGADT.T a_a9N -> a_a9N
[LclIdX]
NonGADT.f =
  \ (@ a_acv) (ds_dcC :: NonGADT.T a_acv) ->
case ds_dcC of _ { NonGADT.T rb_dcE ->
GHC.Tuple.() `cast` (Sym rb_dcE :: () ~# a_acv)
}

NonGADT.x :: NonGADT.T ()
[LclIdX]
NonGADT.x = NonGADT.$WT @ () (GHC.Types.Eq# @ * @ () @ () @~ ())

The @ is type application; cast is a System Fc feature that allows
type equality to be witnessed by terms.

Removing the module names and renaming things to be a bit more readable, we
get:

f :: forall a. T a -> a
f = \ (@ a) (x :: T a) -> case x of { T c -> () `cast` (Sym c :: () ~# a) }
-- Here ~# is an unboxed type equality witness; it gets erased during compilation.
-- We need Sym c because c :: a ~# () and cast wants to go from () to a, so the
-- compiler uses Sym to swap the order.

x :: T ()
x = T @() (Eq# @* @() @() @~ ())
-- Eq# seems to be the constructor for ~#; I'm not sure what the () syntax is.
-- Notice the kind polymorphism; Eq# takes a kind argument as its first
-- argument, then two type arguments of that kind.

  -- ryan

On Fri, Aug 24, 2012 at 2:39 AM, Matthew Steele mdste...@alum.mit.eduwrote:

 On Aug 23, 2012, at 10:32 PM, wren ng thornton wrote:

  Now, be careful of something here. The reason this fails is because
 we're compiling Haskell to System Fc, which is a Church-style lambda
 calculus (i.e., it explicitly incorporates types into the term language).
 It is this fact of being Church-style which forces us to instantiate ifn
 before we can do case analysis on it. If, instead, we were compiling
 Haskell down to a Curry-style lambda calculus (i.e., pure lambda terms,
 with types as mere external annotations) then everything would work fine.
 In the Curry-style world we wouldn't need to instantiate ifn at a specific
 type before doing case analysis, so we don't have the problem of magicking
 up a type. And, by parametricity, the function fn can't do anything
 particular based on the type of its argument, so we don't have the problem
 of instantiating too early[1].

 Okay, I think that's what I was looking for, and that makes perfect sense.
  Thank you!

  Of course, (strictly) Curry-style lambda calculi are undecidable after
 rank-2 polymorphism, and the decidability at rank-2 is pretty wonky. Hence
 the reason for preferring to compile down to a Church-style lambda
 calculus. There may be some intermediate style which admits your code and
 also admits the annotations needed for inference/checking, but I don't know
 that anyone's explored such a thing. Curry-style calculi tend to be
 understudied since they go undecidable much more quickly.

 Gotcha.  So if I'm understanding correctly, it sounds like the answer to
 one of my earlier questions is (very) roughly, Yes, the original version
 of bar would be typesafe at runtime if the typechecker were magically able
 to allow it, but GHC doesn't allow it because trying to do so would get us
 into undecidability nastiness and isn't worth it.  Which is sort of what I
 expected, but I couldn't figure out why; now I know.

 I think now I will go refresh myself on System F (it's been a while) and
 read up on System Fc (which I wasn't previously aware of). (-:

 Cheers,
 -Matt


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Flipping type constructors

2012-08-20 Thread Ryan Ingram
It seems really hard to solve this, since the type checker works before
instance selection has had a chance to do anything.

Instead of looking at the instance declaration, look at the use site:

   lift x

expects the argument to have type

   x :: t m a

for some t :: (* -> *) -> * -> *, m :: * -> *, and a :: *; it's not until t
is known that we can do instance selection, and in your case, EitherT M A B
doesn't have the required form and so is a type error already.

I think the best answer is sadly just to have a (kind-polymorphic!)
newtype flip and deal with it.

I can imagine there being some way to (ab)use kind polymorphism to redefine
MonadTrans in a way that allows the 'm' argument to appear in more places
in the target type, but I'm not clever enough to come up with a proposal
for how to do so.
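
As a rough sketch of that workaround: assuming a hypothetical concrete
EitherT with the argument order above (the real definition is elided in the
original post), a hand-rolled flip of the middle arguments lets MonadTrans
be declared:

{-# LANGUAGE FlexibleInstances, KindSignatures #-}
import Control.Monad (liftM)
import Control.Monad.Trans.Class (MonadTrans (..))

-- Hypothetical EitherT, only for illustration.
newtype EitherT (f :: * -> *) a b = EitherT { runEitherT :: f (Either a b) }

-- Reorder the arguments so the (* -> *) parameter comes right before the
-- final one, which is the shape MonadTrans expects.
newtype FlipT t a (m :: * -> *) b = FlipT { runFlipT :: t m a b }

instance MonadTrans (FlipT EitherT a) where
  lift = FlipT . EitherT . liftM Right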

  -- ryan


On Mon, Aug 13, 2012 at 4:38 PM, Tony Morris tonymor...@gmail.com wrote:

 I have a data-type that is similar to EitherT, however, I have ordered
 the type variables like so:

 data EitherT (f :: * -> *) (a :: *) (b :: *) = ...

 This allows me to declare some desirable instances:

 instance Functor f => Bifunctor (EitherT f)
 instance Foldable f => Bifoldable (EitherT f)
 instance Traversable f => Bitraversable (EitherT f)

 However, I am unable to declare a MonadTrans instance:

 instance MonadTrans (EitherT a) -- kind error

 I looked at Control.Compose.Flip to resolve this, but it does not appear
 to be kind-polymorphic.

 http://hackage.haskell.org/packages/archive/TypeCompose/0.9.1/doc/html/src/Control-Compose.html#Flip

 I was wondering if there are any well-developed techniques to deal with
 this? Of course, I could just write my own Flip with the appropriate
 kinds and be done with it. Maybe there is a more suitable way?


 --
 Tony Morris
 http://tmorris.net/



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-20 Thread Ryan Ingram
Also, I have to admit I was a bit handwavy here; I meant P in a
metatheoretic sense, that is P(a) is some type which contains 'a' as a
free variable, and thus the 'theorem' is really a collection of theorems
parametrized on the P you choose.

For example, P(a) could be Show a && a -> Int; in that case we get the
theorem

exists a. (Show a, a -> Int)
 =
forall r. (forall a. Show a => (a -> Int) -> r) -> r

as witnessed by the following code (using the ExistentialQuantification and
RankNTypes extensions)

data P = forall a. Show a => MkP (a -> Int)
type CPS_P r = (forall a. Show a => (a -> Int) -> r) -> r

isoR :: P -> forall r. CPS_P r
isoR (MkP f) k =
   -- pattern match on MkP brings a fresh type T into scope,
   -- along with f :: T -> Int, and the constraint Show T.
   -- k :: forall a. Show a => (a -> Int) -> r
   -- so, k {T} f :: r
   k f

isoL :: (forall r. CPS_P r) -> P
isoL k = k (\x -> MkP x)
-- k :: forall r. (forall a. Show a => (a -> Int) -> r) -> r
-- k {P} = (forall a. Show a => (a -> Int) -> P) -> P
-- MkP :: forall a. Show a => (a -> Int) -> P
-- therefore, k {P} MkP :: P

Aside: the type 'exists a. (Show a, a -> Int)' is a bit odd, and is another
reason we don't have first class existentials in Haskell.  The 'forall'
side is using currying (a -> b -> r) = ((a,b) -> r) which works because
the constraint => can be modeled by dictionary passing.  But we don't have
a simple way to represent the dictionary (Show a) as a member of a tuple.

One answer is to pack it up in another existential; I find this a bit of
a misnomer since there's nothing existential about this data type aside
from the dictionary:

data ShowDict a = Show a => MkShowDict

Then the theorem translation is a bit more straightforward:

data P = forall a. MkP (ShowDict a, a -> Int)
type CPS_P r = (forall a. (ShowDict a, a -> Int) -> r) -> r

-- theorem: P = forall r. CPS_P r

isoL :: P -> forall r. CPS_P r
isoL (MkP x) k = k x

isoR :: (forall r. CPS_P r) -> P
isoR k = k (\x -> MkP x)

  -- ryan

On Sat, Aug 18, 2012 at 8:24 PM, wren ng thornton w...@freegeek.org wrote:

 On 8/17/12 12:54 AM, Alexander Solla wrote:

 On Thu, Aug 16, 2012 at 8:07 PM, wren ng thornton w...@freegeek.org
 wrote:

 Though bear in mind we're discussing second-order quantification here,
 not
 first-order.


 Can you expand on what you mean here?  I don't see two kinds of
 quantification in the type language (at least, reflexively, in the context
 of what we're discussing).  In particular, I don't see how to quantify
 over
 predicates for (or sets of, via the extensions of the predicates) types.

 Is Haskell's 'forall' doing double duty?


 Nope, it's the forall of mathematics doing double duty :)

 Whenever doing quantification, there's always some domain being quantified
 over, though all too often that domain is left implicit; whence lots of
 confusion over the years. And, of course, there's the scope of the
 quantification, and the entire expression. For example, consider the
 expression:

 forall a. P(a)

 The three important collections to bear in mind are:
 (1) the collection of things a ranges over
 (2) the collection of things P(a) belongs to
 (3) the collection of things forall a. P(a) belongs to

 So if we think of P as a function from (1) to (2), and name the space of
 such functions (1->2), then we can think of the quantifier as a function
 from (1->2) to (3).


 When you talk to logicians about quantifiers, the first thing they think
 of is so-called first-order quantification. That is, given the above
 expression, they think that the thing being quantified over is a collection
 of individuals who live outside of logic itself[1]. For example, we could
 be quantifying over the natural numbers, or over the kinds of animals in
 the world, or any other extra-logical group of entities.

 In Haskell, when we see the above expression we think that the thing being
 quantified over is some collection of types[2]. But, that means when we
 think of P as a function it's taking types and returning types! So the
 thing you're quantifying over and the thing you're constructing are from
 the same domain[3]! This gets logicians flustered and so they call it
 second-order (or more generally, higher-order) quantification. If you
 assert the primacy of first-order logic, it makes sense right? In the
 first-order case we're quantifying over individuals; in the second-order
 case we're quantifying over collections of individuals; so third-order
 would be quantifying over collections of collections of individuals; and on
 up to higher orders.


 Personally, I find the names first-order and second-order rather
 dubious--- though the distinction is a critical one to make. Part of the
 reason for its dubiousness can be seen when you look at PTSes which make
 explicit that (1), (2), and (3) above can each be the same or different in
 all combinations. First-order quantification is the sort of thing you get
 from Pi/Sigma types in dependently typed languages like LF; second-order
 quantification is the sort 

Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-15 Thread Ryan Ingram
In classical logic, A -> B is equivalent to ~A v B
(with ~ = not and v = or)

So

(forall a. P(a)) -> Q
{implication = not-or}
~(forall a. P(a)) v Q
{forall a. X is equivalent to there does not exist a such that X doesn't hold}
~(~exists a. ~P(a)) v Q
{double negation elimination}
(exists a. ~P(a)) v Q
{a is not free in Q}
exists a. (~P(a) v Q)
{implication = not-or}
exists a. (P(a) -> Q)

These steps are all equivalencies, valid in both directions.
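
In Haskell terms, the direction that needs no classical reasoning can be
witnessed directly; a small sketch (names are made up for illustration):

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}
-- A value of ExImpl p q packages up "exists a. (P(a) -> Q)".
data ExImpl p q = forall a. ExImpl (p a -> q)

-- From such a witness and a proof of "forall a. P(a)" we get Q.
-- The reverse direction is the one that relies on the classical steps above.
useWitness :: ExImpl p q -> (forall a. p a) -> q
useWitness (ExImpl f) univ = f univ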

On Wed, Aug 15, 2012 at 9:32 AM, David Feuer david.fe...@gmail.com wrote:

 On Aug 15, 2012 3:21 AM, wren ng thornton w...@freegeek.org wrote:
  It's even easier than that.
 
  (forall a. P(a)) -> Q  =  exists a. (P(a) -> Q)
 
  Where P and Q are metatheoretic/schematic variables. This is just the
 usual thing about antecedents being in a negative position, and thus
 flipping as you move into/out of that position.

 Most of this conversation is going over my head. I can certainly see how
 exists a. (P(a)->Q) implies that (forall a. P(a))->Q. The opposite
 certainly doesn't hold in classical logic. What sort of logic are you folks
 working in?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fixity declaration extension

2012-08-14 Thread Ryan Ingram
On Tue, Aug 14, 2012 at 1:04 AM, Евгений Пермяков permea...@gmail.comwrote:

  Your idea looks _much_ better from code clarity point of view, but it's
 unclear to me, how to deal with it internally and in error messages. I'm
 not a compiler guy, though.


How to deal with it internally: It's pretty easy, actually.  The hardest
part is implementing an extensible partial order; once you have that and
you can use it to drive comparisons, parsing is not hard.

Basically, at each step when you read an operator token, you need to decide
to shift, that is, put it onto a stack of operations, reduce, that is,
apply the operator at the top of the stack (leaving the current token to
check again at the next step), or give a parse error.  The rules for
deciding which of those to do are pretty simple:

Given X, the operator at the top of the stack, and Y, the operator you just
read:

(1) Compare the precedence of X and Y.  If they are incomparable, it's a
parse error.
(2) If Y is higher precedence than X, shift.
(3) If Y is lower precedence than X, reduce.

(At this point, we know X and Y have equal precedence)

(4) If X or Y is non-associative, it's a parse error.
(5) If X and Y don't have the same associativity, it's a parse error.

(At this point we know X and Y have the same associativity)

(6) If X and Y are left associative, reduce.
(7) Otherwise they are both right associative, shift.

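That decision step can be written as a small pure function; a sketch
(illustrative names only, with the result of comparing the two operators in
the user-declared partial order supplied by the caller):

data Assoc   = LeftA | RightA | NonA  deriving Eq
data PrecCmp = Lower | Equal | Higher | Unrelated
data Action  = Shift | Reduce | ParseError String

decide :: PrecCmp           -- precedence of Y (just read) relative to X (top of stack)
       -> Assoc -> Assoc    -- associativity of X and of Y
       -> Action
decide Unrelated _ _ = ParseError "operators have no declared precedence relation"
decide Higher    _ _ = Shift
decide Lower     _ _ = Reduce
decide Equal ax ay
    | ax == NonA || ay == NonA = ParseError "non-associative operator"
    | ax /= ay                 = ParseError "operators mix left and right associativity"
    | ax == LeftA              = Reduce
    | otherwise                = Shift
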
So, for example, reading the expression

x * y + x + g w $ z

Start with stack [empty x].

The empty operator has lower precedence than anything else (that is, it
will never be reduced).  When you finish reading an expression, reduce
until the empty operator is the only thing on the stack and return its
expression.

* is higher precedence than empty, shift.  [empty x, * y]
+ is lower precedence than *, reduce. [empty (x*y)]
+ is higher precedence than empty, shift. [empty (x*y), + x]
+ is the same precedence as +, and is left associative, reduce.  [empty
((x*y)+x)]
+ is higher precedence than empty, shift [empty ((x*y)+x), + g]
function application is higher precedence than +, shift. [empty ((x*y)+x),
+ g, APP w]
$ is lower precedence than function application, reduce. [empty ((x*y)+x),
+ (g w)]
$ is lower precedence than +, reduce. [empty (((x*y)+x) + (g w))]
$ is higher precedence than empty, shift. [empty (((x*y)+x) + (g w)), $ z]
Done, but the stack isn't empty.  Reduce.  [empty ((((x*y)+x) + (g w)) $ z)]
Done, and the stack is empty.
Result: ((((x*y)+x) + (g w)) $ z)

Each operator is shifted exactly once and reduced exactly once, so this
algorithm runs in a number of steps linear in the expression size.
Parentheses start a new sub-stack when parsing the 'thing to apply the
operator to' part of the expression.

Something like this:

simple_exp :: Parser Exp
simple_exp =
    (ExpId <$> identifier) <|> (ExpLit <$> literal) <|> (lparen *> expression <* rparen)

expression :: Parser Exp
expression = do
    first <- simple_exp
    binops [ (Empty, first) ]

reduceAll [ (Empty, e) ] = return e
reduceAll ((op1, e1) : (op2, e2) : rest) =
    reduceAll ((op2, (ExpOperator op1 e1 e2)) : rest)

binops :: Stack -> Parser Exp
binops s = handle_binop <|> handle_application <|> reduceAll s where
    handle_binop = do
        op <- operator
        rhs <- simple_exp
        reduce_until_shift op rhs s
    handle_application = do
        rhs <- simple_exp
        reduce_until_shift FunctionApplication rhs s

reduce_until_shift implements the algorithm above until it eventually
shifts the operator onto the stack.
identifier parses an identifier, operator parses an operator, literal
parses a literal (like 3 or hello)
lparen and rparen parse left and right parentheses.

I haven't considered how difficult it would be to expand this algorithm to
support unary or more-than-binary operators; I suspect it's not
ridiculously difficult, but I don't know.  Haskell's support for both of
those is pretty weak, however; even the lip service paid to unary - is a
source of many problems in parsing Haskell.

Worse, it does not allow to set up fixity relative to operator that is not
 in scope and it will create unnecessary intermodule dependencies.  One
 should fall back to numeric fixities for such cases, if it is needed.


You can get numeric fixity by just declaring precedence equal to some
prelude operator with the desired fixity; this will likely be the common
case.

I would expect modules to declare locally relative fixities between
operators imported from different modules if and only if it was relevant to
that module's implementation.  In most cases I expect the non-ordering to
be resolved by adding parentheses, not by declaring additional precedence
directives; for example, even though (a == b == c) would be a parse error
due to == being non-associative, both ((a == b) == c) and (a == (b == c))
are not.  The same method of 'just add parentheses where you mean it' fixes
any parse error due to incomparable precedences.

  -- ryan
___

Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-14 Thread Ryan Ingram
On Mon, Aug 13, 2012 at 6:53 PM, Alexander Solla alex.so...@gmail.comwrote:

 In a classical logic, the duality is expressed by !E! = A, and !A! = E,
 where E and A are backwards/upsidedown and ! represents negation.  In
 particular, for a proposition P,

 Ex Px = !Ax! Px (not all x's are not P)
 and
 Ax Px = !Ex! Px (there does not exist an x which is not P)

 Negation is relatively tricky to represent in a constructive logic --
 hence the use of functions/implications and bottoms/contradictions.  The
 type ConcreteType - b represents the negation of ConcreteType, because it
 shows that ConcreteType implies the contradiction.

 Put these ideas together, and you will recover the duality as expressed
 earlier.


I'd been looking for a good explanation of how to get from Ex Px to !Ax! Px
in constructive logic, and this is basically it.  What is said here is
actually a slightly different statement, which is what had me confused!

If you do the naive encoding, you get something like

-- This is my favorite representation of Contradiction in Haskell, since
-- it has reductio ad absurdum encoded directly in the type
-- and only requires Rank2Types.
newtype Contradiction = Bottom { absurd :: forall a. a }
-- absurd :: forall a. Contradiction -> a

type Not a = a -> Contradiction
newtype NotC (c :: * -> Constraint) = MkNotC { unNotC :: forall a. c a =>
Not a }
type UselessExists (c :: * -> Constraint) = Not (NotC c)
-- here I'm using ConstraintKinds as, in a sense, the 'most generic' way of
specifying a type-level predicate,
-- at least in bleeding edge Haskell.  It's trivial to make a less generic
version for any particular constraint
-- you care about without going quite so overboard on type system
extensions :)
-- i.e.
-- newtype NoShow = MkNoShow {  unNoShow :: forall a. Show a => Not a }
-- type UselessExistsShow = Not NoShow

-- A simple example: there is a type that is equal to Bool:
silly :: UselessExists ((~) Bool)
silly (MkNotC k) = k True

-- need silly :: NotC ((~) Bool) -> Contradiction
-- after pattern matching on MkNotC
-- k :: forall a. (a ~ Bool) => a -> Contradiction
-- i.e. k :: Bool -> Contradiction
-- therefore, k True :: Contradiction.

This type is useless, however; NotC c isn't usefully inhabited at any
reasonable c -- there's no way to actually call it!

The magic comes when you unify the 'return type' from the two Nots
instead of using bottoms as a catch-all... I guess this is the CPS
translation of negation?

type NotR r a = a -> r
newtype NotRC r (c :: * -> Constraint) = MkNotRC { unNotRC :: forall a. c a
=> NotR r a }
-- MkNotRC :: forall r c. (forall a. c a => a -> r) -> NotRC r

type ExistsR r (c :: * -> Constraint) = NotR r (NotRC r c)
-- ~= c a => (a -> r) -> r

newtype Exists (c :: * -> Constraint) = MkExists { unExists :: forall r.
ExistsR r c }
-- MkExists :: forall c. (forall r. NotR r (NotRC r c)) -> Exists c
-- ~= forall c. (forall r. c a => (a -> r) -> r) -> Exists c

-- A simple example: there is a type that is equal to Bool:
silly2 :: Exists ((~) Bool)
silly2 = MkExists (\(MkNotRC k) -> k False)

This version allows you to specify the type of 'absurd' result you
want, and use that to let the witness of existence escape out via whatever
interface the provided constraint gives.

caseExists :: (forall a. c a => a -> r)  -> Exists c -> r
caseExists k (MkExists f) = f (MkNotRC k)

main = putStrLn $ caseExists show silly2
-- should print False
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fixity declaration extension

2012-08-13 Thread Ryan Ingram
When I was implementing a toy functional languages compiler I did away with
precedence declarations by number and instead allowed the programmer to
specify a partial order on declarations; this seems to be a much cleaner
solution and avoids arbitrary precedences between otherwise unrelated
operators defined in different modules.

You could write statements like

-- define + and - to have the same precedence
infixl + -

-- define * to have higher precedence than +
infixl * above +

-- define / to have the same precedence as *
infixr / equal *

-- $ is right-associative
infixr $
-- you can also separate precedence from fixity declaration
precedence $ below +

-- function application has higher precedence than all operators by
default, but you can override that
infixl . above APP

-- == is non-associative
infix ==

Here's some parses with this system:

a + b - c   =   (a+b)-c
f.x.y z == g w  = (((f.x).y) z) == (g w)
a == b == c  = parse error (non-associative operator)
a * b / c = parse error (left-associative/right-associative operators with
same precedence)
a == b $ c = parse error (no ordering known between == and $)
a $ b + c = a $ (b+c)

I think this is a much cleaner way to solve the problem and I hope
something like it makes it into a future version of Haskell.
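
One way to store the declared partial order, as a sketch (my names, not the
toy compiler's actual code): keep, for each operator, the set of operators
it was declared to sit directly above, and compare two operators by
reachability (operators declared equal are assumed to share one
representative):

import qualified Data.Map as M
import qualified Data.Set as S

data Rel = Higher | Lower | Same | Unrelated deriving (Eq, Show)

comparePrec :: Ord op => M.Map op (S.Set op) -> op -> op -> Rel
comparePrec above x y
    | x == y        = Same
    | reachable x y = Higher
    | reachable y x = Lower
    | otherwise     = Unrelated       -- a parse error unless parenthesized
  where
    reachable from target = go S.empty [from]
      where
        go _ [] = False
        go seen (c : cs)
            | c `S.member` seen       = go seen cs
            | target `S.member` succs = True
            | otherwise               = go (S.insert c seen) (S.toList succs ++ cs)
          where succs = M.findWithDefault S.empty c above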

  -- ryan

On Sun, Aug 12, 2012 at 11:46 AM, Евгений Пермяков permea...@gmail.comwrote:

  fixity declaration has form *infix(l|r)? [Digit]* in haskell. I'm pretty
 sure, that this is not enough for complicated cases. Ideally, fixity
 declarations should have form *infix(l|r)? [Digit](\.(+|-)[Digit])** ,
 with implied infinitely long repeated (.0) tail. This will allow fine
 tuning of operator priorities and much easier priority selection. For
 example, it may be assumed, that bit operations like (..) operator have
 hightest priority and have priorities like 9.0.1 or 9.0.2, anti-lisps like
 ($) have lowest priority like 0.0.1, control operators have base priority
 1.* and logic operations like () have priority of 2.* and it will be
 possibly to add new operators between or above all (for example) control
 operators without moving fixity of other ones.

 Agda2 language supports wide priority range, but still without 'tails' to
 my knowledge. Is there any haskell-influenced language or experimental
 syntactic extension that address the issue?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fwd: 'let' keyword optional in do notation?

2012-08-13 Thread Ryan Ingram
 But it would be in line with <- bindings in the do notation, so maybe it
wouldn't feel so wrong.

I was about to post this exact example.

do
  x <- return 1
  x <- return x
  return x

seems to work just fine (the answer is 1).  I'd even be ok with =-in-do
being non-recursive like <-

  -- ryan

On Thu, Aug 9, 2012 at 1:35 AM, Tillmann Rendel 
ren...@informatik.uni-marburg.de wrote:

 Hi,


 Martijn Schrage wrote:

 Would expanding each let-less binding to a separate let feel more
 sound to you?

  That was actually my first idea, but then two declarations at the same
 level will not be in the same binding group, so

 do x = y
y = 1

 would not compile. This would create a difference with all the other
 places where bindings may appear.


 But it would be in line with <- bindings in the do notation, so maybe it
 wouldn't feel so wrong.

   Tillmann


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-13 Thread Ryan Ingram
On Mon, Aug 13, 2012 at 12:30 PM, Jay Sulzberger j...@panix.com wrote:

 Does Haskell have a word for existential type declaration?  I
 have the impression, and this must be wrong, that forall does
 double duty, that is, it declares a for all in some sense like
 the usual for all of the Lower Predicate Calculus, perhaps the
 for all of system F, or something near it.


It's using the forall/exists duality:
exists a. P(a)  =  forall r. (forall a. P(a) -> r) -> r

For example:
(exists a. Show a => a) = forall r. (forall a. Show a => a -> r) -> r

This also works when you look at the type of a constructor:

data Showable = forall a. Show a => MkShowable a
-- MkShowable :: forall a. Show a => a -> Showable

caseShowable :: forall r. (forall a. Show a => a -> r) -> Showable -> r
caseShowable k (MkShowable x) = k x

testData :: Showable
testData = MkShowable (1 :: Int)

useData :: Int
useData = case testData of (MkShowable x) -> length (show x)

useData2 :: Int
useData2 = caseShowable (length . show) testData

Haskell doesn't currently have first class existentials; you need to wrap
them in an existential type like this.  Using 'case' to pattern match on
the constructor unwraps the existential and makes any packed-up constraints
available to the right-hand-side of the case.

An example of existentials without using constraints directly:

data Stream a = forall s. MkStream s (s -> Maybe (a,s))

viewStream :: Stream a -> Maybe (a, Stream a)
viewStream (MkStream s k) = case k s of
    Nothing -> Nothing
    Just (a, s') -> Just (a, MkStream s' k)

takeStream :: Int -> Stream a -> [a]
takeStream 0 _ = []
takeStream n s = case viewStream s of
    Nothing -> []
    Just (a, s') -> a : takeStream (n-1) s'

fibStream :: Stream Integer
fibStream = MkStream (0,1) (\(x,y) -> Just (x, (y, x+y)))

Here the 'constraint' made visible by pattern matching on MkStream (in
viewStream) is that the s type stored in the stream matches the s type
taken and returned by the 'get next element' function.  This allows the
construction of another stream with the same function but a new state --
MkStream s' k.

It also allows the stream function in fibStream to be non-recursive which
greatly aids the GHC optimizer in certain situations.
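
As a quick check of the definitions above:

firstFibs :: [Integer]
firstFibs = takeStream 5 fibStream   -- [0,1,1,2,3]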

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What Haskell Records Need

2012-08-03 Thread Ryan Ingram
On Fri, Aug 3, 2012 at 10:11 AM, Jonathan Geddes
geddes.jonat...@gmail.comwrote:

 The nice part about the SEC functions is that
 they compose as regular functions. Lenses are
 super powerful in that they form a category.
 Unfortunately using categories other than
 functions feels a tad unwieldy because you
 have to hide something from prelude and then
 import Category. (A bit like exceptions,
 currently).


FWIW this is also true for van Laarhoven lenses[1]

type FTLens a b = forall f. Functor f => (b -> f b) -> (a -> f a)

newtype Const a b = Const { unConst :: a } deriving Functor

get :: FTLens a b -> a -> b
get ft = unConst . ft Const

{-
ft :: forall f. (b -> f b) -> (a -> f a)
Const :: forall x. b -> Const b x
ft Const :: a -> Const b a
-}

newtype Id a = Id { unId :: a } deriving Functor

set :: FTLens a b -> b -> a -> a
set ft b = unId . ft (\_ -> Id b)

modify :: FTLens a b -> (b -> b) -> a -> a
modify ft k = unId . ft (Id . k)

-- example
fstLens :: FTLens (a,b) a
fstLens aToFa (a,b) = (,b) <$> aToFa a

-- and you get
compose :: FTLens a b -> FTLens b c -> FTLens a c
compose = (.)

identity :: FTLens a a
identity = id
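
A couple of hypothetical checks of the definitions above (this assumes
TupleSections, RankNTypes and DeriveFunctor are enabled for the snippet as
a whole):

sndLens :: FTLens (a,b) b
sndLens bToFb (a,b) = (a,) <$> bToFb b

innerLens :: FTLens ((a,b),c) b
innerLens = compose fstLens sndLens

exampleGet :: Int
exampleGet = get innerLens ((1,2),"x")     -- 2

exampleSet :: ((Int,Int), String)
exampleSet = set innerLens 5 ((1,2),"x")   -- ((1,5),"x")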





 If you like the look of set with lenses,
 you could define a helper function to use
 with SEC updaters.

 set :: ((b -> a) -> c) -> a -> c
 set sec = sec . const
 
 --and then use it like so:
 setPersonsSalary :: Salary -> Person -> Person
 setPersonsSalary salary = set personsSalary' salary

 With it you can use an updater as a setter.
 I'd like to reiterate one of finer points of
 the original proposal.

 The compiler could disallow using old-style
 update syntax for fields whose SEC update
 function is not in scope, giving us
 fine-grained control over access and update.
 On the other hand we currently have to create
 new functions to achieve this (exporting the
 getter means exporting the ability to update
 [using update syntax] as well, currently).

 And now back to lenses:

 it is really convenient how lenses let you compose the getter
 and setter together.

 I don't recall too many cases where having the
 getter and setter and modifier all in one
 place was terribly useful. Could anyone give
 me an example? But again, where that is
 useful, a lens can be created from a getter
 and a SEC updater.

 Thoughts?

 --Jonathan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What Haskell Records Need

2012-08-03 Thread Ryan Ingram
Oops, forgot my references

[1] Original post:
http://www.twanvl.nl/blog/haskell/cps-functional-references
[2] polymorphic update support: http://r6.ca/blog/20120623T104901Z.html
[3] another post about these:
http://comonad.com/reader/2012/mirrored-lenses/

On Fri, Aug 3, 2012 at 1:53 PM, Ryan Ingram ryani.s...@gmail.com wrote:



 On Fri, Aug 3, 2012 at 10:11 AM, Jonathan Geddes 
 geddes.jonat...@gmail.com wrote:

 The nice part about the SEC functions is that
 they compose as regular functions. Lenses are
 super powerful in that they form a category.
 Unfortunately using categories other than
 functions feels a tad unwieldy because you
 have to hide something from prelude and then
 import Category. (A bit like exceptions,
 currently).


 FWIW this is also true for van Laarhoven lenses[1]

 type FTLens a b = forall f. Functor f => (b -> f b) -> (a -> f a)

 newtype Const a b = Const { unConst :: a } deriving Functor

 get :: FTLens a b -> a -> b
 get ft = unConst . ft Const

 {-
 ft :: forall f. (b -> f b) -> (a -> f a)
 Const :: forall x. b -> Const b x
 ft Const :: a -> Const b a
 -}

 newtype Id a = Id { unId :: a } deriving Functor

 set :: FTLens a b -> b -> a -> a
 set ft b = unId . ft (\_ -> Id b)

 modify :: FTLens a b -> (b -> b) -> a -> a
 modify ft k = unId . ft (Id . k)

 -- example
 fstLens :: FTLens (a,b) a
 fstLens aToFa (a,b) = (,b) <$> aToFa a

 -- and you get
 compose :: FTLens a b -> FTLens b c -> FTLens a c
 compose = (.)

 identity :: FTLens a a
 identity = id





 If you like the look of set with lenses,
 you could define a helper function to use
 with SEC updaters.

 set :: ((b -> a) -> c) -> a -> c
 set sec = sec . const
 
 --and then use it like so:
 setPersonsSalary :: Salary -> Person -> Person
 setPersonsSalary salary = set personsSalary' salary

 With it you can use an updater as a setter.
 I'd like to reiterate one of finer points of
 the original proposal.

 The compiler could disallow using old-style
 update syntax for fields whose SEC update
 function is not in scope, giving us
 fine-grained control over access and update.
 On the other hand we currently have to create
 new functions to achieve this (exporting the
 getter means exporting the ability to update
 [using update syntax] as well, currently).

 And now back to lenses:

 it is really convenient how lenses let you compose the getter
 and setter together.

 I don't recall too many cases where having the
 getter and setter and modifier all in one
 place was terribly useful. Could anyone give
 me an example? But again, where that is
 useful, a lens can be created from a getter
 and a SEC updater.

 Thoughts?

 --Jonathan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting numeric overflows

2012-07-31 Thread Ryan Ingram
Sure, but it's easy to roll your own from those primitives:

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts

addCarry :: Int -> Int -> (Int, Bool)
addCarry (I# x) (I# y) = case addIntC# x y of
    (# s, c #) -> case c of
        0# -> (I# s, False)
        _  -> (I# s, True)

or something along those lines.

  -- ryan

On Mon, Jul 30, 2012 at 1:43 PM, Евгений Пермяков permea...@gmail.comwrote:

 On 07/31/2012 12:04 AM, Artyom Kazak wrote:

 Евгений Пермяков permea...@gmail.com wrote in his message of Mon, 30 Jul
 2012 09:47:48 +0300:

  Can someone tell me if there are any primitives, that used to detect
 machine type overflows, in ghc haskell ? I perfectly understand, that I can
 build something based on preconditioning of variables, but this will kill
 any performance, if needed.


 In GHC.Prim -- primitives addIntC# and subIntC#:

  addIntC# :: Int# -> Int# -> (# Int#, Int# #)
 Add with carry. First member of result is (wrapped) sum; second member
 is 0 iff no overflow occurred.


  subIntC# :: Int# -> Int# -> (# Int#, Int# #)
 Subtract with carry. First member of result is (wrapped) difference;
 second member is 0 iff no overflow occurred.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

 Still no way to detect overflow in *.

 Strangely enough, I found some relevant descriptions in *.pp in dev
 branch, so I expect them in 7.6.1. They applies to native-size Word and Int
 only.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting numeric overflows

2012-07-31 Thread Ryan Ingram
Actually, looking at the docs, I'm not sure if case expressions work on
unboxed ints; you may need

addCarry (I# x) (I# y) = case addIntC# x y of (# s, c #) -> (I# s, c /=# 0#)

which is somewhat simpler anyways.

  -- ryan

On Tue, Jul 31, 2012 at 1:56 AM, Ryan Ingram ryani.s...@gmail.com wrote:

 Sure, but it's easy to roll your own from those primitives:

 {-# LANGUAGE MagicHash, UnboxedTuples #-}
 import GHC.Exts

  addCarry :: Int -> Int -> (Int, Bool)
  addCarry (I# x) (I# y) = case addIntC# x y of
      (# s, c #) -> case c of
          0# -> (I# s, False)
          _  -> (I# s, True)

 or something along those lines.

   -- ryan


 On Mon, Jul 30, 2012 at 1:43 PM, Евгений Пермяков permea...@gmail.comwrote:

 On 07/31/2012 12:04 AM, Artyom Kazak wrote:

  Евгений Пермяков permea...@gmail.com wrote in his message of Mon, 30 Jul
  2012 09:47:48 +0300:

  Can someone tell me if there are any primitives, that used to detect
 machine type overflows, in ghc haskell ? I perfectly understand, that I can
 build something based on preconditioning of variables, but this will kill
 any performance, if needed.


 In GHC.Prim -- primitives addIntC# and subIntC#:

   addIntC# :: Int# -> Int# -> (# Int#, Int# #)
  Add with carry. First member of result is (wrapped) sum; second member
  is 0 iff no overflow occurred.


   subIntC# :: Int# -> Int# -> (# Int#, Int# #)
  Subtract with carry. First member of result is (wrapped) difference;
  second member is 0 iff no overflow occurred.


  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 Still no way to detect overflow in *.

 Strangely enough, I found some relevant descriptions in *.pp in dev
 branch, so I expect them in 7.6.1. They applies to native-size Word and Int
 only.


  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] why does a foldr not have a space leak effect?

2012-07-30 Thread Ryan Ingram
The difference is that foldl *must* produce the entire list of thunks, even
if f is lazy in its first argument.

There's no foldl that can perform better given a sufficiently-lazy f; given

head = foldr go fail where
    go x y = x
    fail = error "head: empty list"

head [a,b,c,d]
= foldr go fail [a,b,c,d]
= go a (foldr go fail [b,c,d])
= a

you might think you can define

last = foldl go fail where
  go x y = y
  fail = error "last: empty list"

but this fails to be sufficiently lazy:

last [a,b,c,d]
= foldl go fail [a,b,c,d]
= foldl go (go fail a) [b,c,d]
= foldl go (go (go fail a) b) [c,d]
= foldl go (go (go (go fail a) b) c) [d]
= foldl go (go (go (go (go fail a) b) c) d) []
= go (go (go (go fail a) b) c) d
= d

which allocates lots of extra space for thunks, which may even take more
memory than the original list.

Whereas if last uses foldl':

last [a,b,c,d]
= foldl' go fail [a,b,c,d]
= let z = go fail a in seq z $ foldl' go z [b,c,d]
= foldl' go a [b,c,d]
= let z = go a b in seq z $ foldl' go z [c,d]
= foldl' go b [c,d]
= ...
= let z = go c d in seq z $ foldl' go z []
= foldl' go d []
= d
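
For reference, here is a sketch of the two folds these traces assume, in the
style of the standard library definitions:

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl _ z []     = z
foldl f z (x:xs) = foldl f (f z x) xs              -- the accumulator grows as a chain of thunks

foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' _ z []     = z
foldl' f z (x:xs) = let z' = f z x
                    in z' `seq` foldl' f z' xs     -- each step is forced before recursing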

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Polyvariadic composition

2012-07-30 Thread Ryan Ingram
My completely off-the-cuff guess is that
a a b b
isn't considered more or less specific than
(x -> a) ar (x -> b) br
since they both apply some constraint on the types.  For example, it's not
immediately clear that the first instance can't be used for (x -> a) (x ->
a) (x -> b) (x -> b)

Whereas when you say
a ar b br
the type
(x -> a) ar (x -> b) br
is strictly more specific, so the overlapping instance can be chosen.

Remember instance selection is done entirely via the instance head, so
instance X a a
is not the same as
instance (a ~ b) => X a b

The first case supplies an instance for any two equal types, and the second
case supplies an instance for *any two types*, then throws an error if the
compiler can't prove that the two types are equal.

For example, without overlapping instances, you can write

class X a b where foo :: a -> b

instance X a a where foo = id
instance X Int Bool where foo = (== 0)

But if instead you specify
instance (a ~ b) => X a b where foo = id
you can't specify the Int Bool instance without overlap.

  -- ryan

On Mon, Jul 30, 2012 at 12:32 PM, Artyom Kazak artyom.ka...@gmail.com wrote:

 Hello,

 I have accidentally written my version of polyvariadic composition
 combinator, `mcomp`. It differs from Oleg’s version (
 http://okmij.org/ftp/Haskell/polyvariadic.html#polyvar-comp)
  in three aspects: a) it is simpler, b) it works without enumerating basic
 cases (all existing types, in other words), and c) it needs more type
 extensions.

  {-# LANGUAGE
     MultiParamTypeClasses
   , FunctionalDependencies
   , FlexibleInstances
   , UndecidableInstances
   , TypeFamilies
   , OverlappingInstances
     #-}

 class Mcomp a ar b br | a br -> b where
   mcomp :: a -> (ar -> br) -> b

 instance (a ~ ar, b ~ br) => Mcomp a ar b br where
   mcomp a f = f a

 instance (Mcomp a ar b br) => Mcomp (x -> a) ar (x -> b) br where
   mcomp a f = \x -> mcomp (a x) f


 My question is: why doesn’t it work when I replace

 instance (a ~ ar, b ~ br) => Mcomp a ar b br

 with

 instance Mcomp a a b b

 ? I thought that equal letters mean equal types…

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-07-30 Thread Ryan Ingram
With apologies to Jim Coplien :)

I've been seeing this pattern in a surprising number of instance
definitions lately:

instance (a ~ ar, b ~ br) => Mcomp a ar b br [1]
instance (b ~ c, CanFilterFunc b a) => CanFilter (b -> c) a [2]

The trick is that since instance selection is done entirely on the instance
head, these instances are strictly more general than the ones they replace:

instance Mcomp a a b b
instance CanFilterFunc b => CanFilter (b -> b) a

The compiler has to do a lot more work to select these instances; it has to
prove that the matching types actually match before it can select the
instance; if it can't, it won't select an instance, and instead will
complain about no instance CLASS Int a.  But with the CRIP, you help the
compiler--it chooses the general instance, and then gets a constraint to
solve.  The constraint forces the two types to unify, or else there is a
type error.
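
To make the trade-off concrete, here is a small sketch of the two styles side
by side (the class and function names are made up for illustration):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, TypeFamilies,
             UndecidableInstances #-}

class Conv a b where
  conv :: a -> b

-- Closed-head style: only usable once both types are already known equal.
-- instance Conv a a where conv = id

-- CRIP style: always selected, then the context forces the types to unify.
instance (a ~ b) => Conv a b where
  conv = id

-- With the CRIP instance this type-checks: GHC picks the instance for
-- Conv Bool b, gets the wanted constraint Bool ~ b, and so fixes b = Bool
-- for the call to show.  With the closed-head instance, b stays ambiguous
-- and the program is rejected.
example :: String
example = show (conv True)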

What I'm wondering is--are there many cases where you really want the
non-constraint-generating behavior?  It seems like (aside from contrived,
ahem, instances) whenever you have instance CLASS A B where A and B share
some type variables, that there aren't any valid instances of the same
class where they don't share the types in that way.  For example, I've
never really seen a class in practice with instances like

class Foo a b
instance Foo a a
instance Foo ConcreteTypeA ConcreteTypeB

Note that it's very difficult to use polymorphic types in the second
instance without risking overlap.

TL;DR: I, for one, welcome our new type equality constraint overlords.

  -- ryan

[1] http://permalink.gmane.org/gmane.comp.lang.haskell.cafe/99611
[2] http://www.yesodweb.com/blog/2012/07/classy-prelude
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] specifying using type class

2012-07-30 Thread Ryan Ingram
Generally the way this is done in Haskell is that the interface to the type
is specified in a typeclass (or, alternatively, in a module export list,
for concrete types), and the axioms are specified in a method to be tested
in some framework (i.e. QuickCheck, SmallCheck, SmartCheck) which can
automatically generate instances of your type and test that the axioms hold.

For example:

class QueueLike q where
  empty :: q a
  insert :: a -> q a -> q a
  viewFirst :: q a -> Maybe (a, q a)
  size :: q a -> Integer

-- can use a single proxy type if we have kind polymorphism, but that's an
-- experimental feature right now
data Proxy2 (q :: * -> *) = Proxy2
instance Arbitrary (Proxy2 q) where arbitrary = return Proxy2

prop_insertIncrementsSize :: forall q. QueueLike q => q () -> Bool
prop_insertIncrementsSize q = size (insert () q) == size q + 1

prop_emptyQueueIsEmpty :: forall q. QueueLike q => Proxy2 q -> Bool
prop_emptyQueueIsEmpty Proxy2 = maybe True (const False) $ viewFirst (empty :: q ())

Then you specialize these properties to your type and test them:

instance QueueLike [] where ...
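
(For concreteness, one way the elided list instance might be filled in -- a
naive FIFO reading where insert enqueues at the back:)

instance QueueLike [] where
  empty = []
  insert x q = q ++ [x]            -- O(n), but fine for testing
  viewFirst []     = Nothing
  viewFirst (x:xs) = Just (x, xs)
  size = fromIntegral . length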

ghci> quickCheck (prop_insertIncrementsSize :: [()] -> Bool)
Valid, passed 100 tests
or
Failed with: [(), (), ()]

QuickCheck randomly generates objects of your data structure and tests your
property against them.  While not as strong as a proof, programs with 100%
quickcheck coverage are *extremely* reliable.  SmartCheck is an extension
of QuickCheck that tries to reduce test cases to the minimum failing size.

SmallCheck does exhaustive testing on the properties for finite data
structures up to a particular size.  It's quite useful when you can prove
your algorithms 'generalize' after a particular point.

There aren't any libraries that I know of for dependent-type style program
proof for haskell; I'm not sure it's possible.  The systems I know of have
you program in a more strongly typed language (Coq/agda) and export Haskell
programs once they are proven safe.  Many of these rely on unsafeCoerce in
the Haskell code because they have proven stronger properties about the
types than Haskell can; I look at that code with some trepidation--I am not
sure what guarantees the compiler makes about unsafeCoerce.

  -- ryan

On Sun, Jul 22, 2012 at 7:19 AM, Patrick Browne patrick.bro...@dit.ie wrote:

 {-
 Below is a *specification* of a queue.
 If possible I would like to write the equations in type class.
 Does the type class need two type variables?
 How do I represent the constructors?
 Can the equations be written in the type class rather than the instance?
 -}

 module QUEUE_SPEC where
 data Queue e   = New | Insert (Queue e) e deriving Show

 isEmpty :: Queue e -> Bool
 isEmpty  New  = True
 isEmpty (Insert q e) = False

 first :: Queue e -> e
 first (Insert q e) =  if (isEmpty q) then e else (first q)


 rest :: Queue e -> Queue e
 rest (Insert  q e ) = if (isEmpty q) then New  else (Insert (rest q) e)


 size :: Queue e -> Int
 size New  = 0
 size (Insert q e) = succ (size q)

 {-
 some tests of above code
 size (Insert (Insert (Insert New 5) 6) 3)
 rest (Insert (Insert (Insert New 5) 6) 3)

 My first stab at a class
 class QUEUE_SPEC q e where
  new :: q e
  insert :: q e -> q e
  isEmpty :: q e -> Bool
  first :: q e -> e
  rest :: q e -> q e
  size :: q e -> Int

 -}


 This message has been scanned for content and viruses by the DIT
 Information Services E-Mail Scanning Service, and is believed to be clean.
 http://www.dit.ie
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What does unpacking an MVar really mean?

2012-07-30 Thread Ryan Ingram
I'm not sure I totally understand your question about 'unpacking' an MVar,
but I'm going to assume you mean data structures that use the {-# UNPACK
#-} pragma, like in Control.Concurrent.Future [1] and
Control.Concurrent.NamedLock [2].

Here is how MVar is defined in GHC [3]:
data MVar a = MVar (MVar# RealWorld a)

An MVar# s a is an unboxed pointer to a structure understood by the GHC
runtime.

So yes, you can imagine a MVar as a pointer-to-pointer.  The structure it
points at likely has another pointer to the embedded boxed a, so it may
even be pointer-to-pointer-to-pointer.

The MVar data structure exists to allow laziness; for example

   let x = unsafePerformIO (newMVar ()) in ()

is likely to not allocate an MVar#, just a thunk that would create an MVar
if it was evaluated.  Unboxed objects (represented by convention with # in
GHC), on the other hand, are strict, so if you have an MVar# RealWorld (),
you know it points to a valid MVar#.

From [2] we have
data NLItem = NLItem {-# UNPACK #-} !Int
                     {-# UNPACK #-} !(MVar ())

All the {-# UNPACK #-} pragma does is embed the contents of a strict
single-constructor data declaration *directly* into the structure
containing it; it's like you declared NLItem as such:

data NLItem = NLItem Word# (MVar# RealWorld ())

except that if you call functions that want an Int or MVar thunk, the
compiler will automatically re-box them in a new I#/MVar constructor.
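
As a standalone illustration (this toy type is not from either package
above), the same thing happens for any strict, unpacked field:

-- Without the pragma, P stores a pointer to a separately allocated I# box;
-- with it, the raw Int# lives directly inside P's constructor.
data P = P {-# UNPACK #-} !Int Bool

-- Using the field where a boxed Int is expected makes GHC re-box it
-- behind the scenes with the I# constructor.
getN :: P -> Int
getN (P n _) = n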

Many copies of pointers to the same MVar# may exist; they are all
'identical' MVars; equality is defined as such:
instance Eq (MVar a) where
(MVar mvar1#) == (MVar mvar2#) = sameMVar# mvar1# mvar2#

where sameMVar# is a primitive that is probably just raw pointer equality.

Because of this, boxed MVars can be garbage collected without necessarily
garbage-collecting the MVar# it holds, if a live reference to that MVar#
still exists elsewhere.

  -- ryan

[1]
http://hackage.haskell.org/packages/archive/future/2.0.0/doc/html/src/Control-Concurrent-Future.html
[2]
http://hackage.haskell.org/packages/archive/named-lock/0.1/doc/html/src/Control-Concurrent-NamedLock.html
[3]
http://www.haskell.org/ghc/docs/7.4.1/html/libraries/base/src/GHC-MVar.html#MVar

On Mon, Jul 30, 2012 at 9:25 PM, Leon Smith leon.p.sm...@gmail.com wrote:

 I admit I don't know exactly how MVars are implemented,  but given that
 they can be aliased and have indefinite extent,   I would think that they
 look something vaguely like a C datatype ** var,  basically a pointer to an
 MVar (which is itself a pointer,  modulo some other things such as a thread
 queue.)

 And,  I would think that unpacking such a structure would basically be
 eliminating one layer of indirection,  so it would then look vaguely like a
 C datatype * var.  But again,  given aliasing and indefinite extent, this
 would seem to be a difficult thing to do.

 Actually this isn't too difficult if an MVar only exists in a single
 unpacked structure:   other references to the MVar can simply be pointers
 into the structure.   But the case where an MVar is unpacked into two
 different structures suggests that,  at least in some cases,  an unpacked
 MVar is still a C datatype ** var;

 So, is my understanding more or less correct?  Does anybody have a good,
 succinct explanation of how MVars are implemented,  and how they are
 unpacked?

 One final question,   assuming that unpacking an MVar really does
 eliminate a layer of indirection,  and that other references to that MVar
 are simply pointers into a larger structure,   what happens to that larger
 structure when there are no more references to it (but still some
 references to the MVar?)Given the complications that must arise out of
 a doubly unpacked MVar,  I'm going to guess that the larger structure
 does get garbage collected in this case,  and that the MVar becomes
 dislodged from this structure.   Would that MVar then be placed directly
 inside another unpacked reference, if one is available?

 Best,
 Leon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to define a Monad instance

2012-07-30 Thread Ryan Ingram
To take this a step further, if what you really want is the syntax sugar
for do-notation (and I understand that, I love sweet, sweet syntactical
sugar), you are probably implementing a Writer monad over some monoid.

Here's two data structures that can encode this type;

data Replacer1 k a = Replacer1 (k -> Maybe k) a
data Replacer2 k a = Replacer2 [(k,k)] a

instance Monad Replacer1 where
    return x = Replacer1 (\_ -> Nothing) x
    Replacer1 ka a >>= f = result where
        Replacer1 kb b = f a
        result = Replacer1 (\x -> ka x `mplus` kb x) b

(!) :: Eq k => k -> k -> Replacer1 k ()
x ! y = Replacer1 (\k -> if k == x then Just y else Nothing) ()

replace1 :: Replacer1 k () -> [k] -> [k]    -- look ma, no Eq requirement!
replace1 (Replacer1 k ()) = map (\x -> fromMaybe x $ k x) -- from Data.Maybe

table1 :: Replacer1 Char ()
table1 = do
'a' ! 'b'
'A' ! 'B'

test = replace1 table1 "All I want"

-- Exercise: what changes if we switch ka and kb in the result of (>>=)?
-- When does it matter?

-- Exercises for you to implement:
instance Monad Replacer2 k where
replacer :: Eq k => Replacer2 k -> [k] -> [k]
($) :: k -> k -> Replacer2 k

-- Exercise: Lets make use of the fact that we're a monad!
--
-- What if the operator ! had a different type?
-- (!) :: Eq k => k -> k -> Replacer k Integer
-- which returns the count of replacements done.
--
-- table3 = do
-- count - 'a' ! 'b'
-- when (count > 3) ('A' ! 'B')
-- return ()
--
-- Do any of the data structures I've given work?  Why or why not?
-- Can you come up with a way to implement this?

  -- ryan

On Sat, Jul 28, 2012 at 10:07 AM, Steffen Schuldenzucker 
sschuldenzuc...@uni-bonn.de wrote:

 On 07/28/2012 03:35 PM, Thiago Negri wrote:
  [...]

  As Monads are used for sequencing, first thing I did was to define the
 following data type:

 data TableDefinition a = Match a a (TableDefinition a) | Restart


 So TableDefinition a is like [(a, a)].

  [...]

 

 So, to create a replacement table:

 table' :: TableDefinition Char
 table' =
  Match 'a' 'b'
  (Match 'A' 'B'
   Restart)

 It look like a Monad (for me), as I can sequence any number of
 replacement values:

 table'' :: TableDefinition Char
 table'' = Match 'a' 'c'
   (Match 'c' 'a'
   (Match 'b' 'e'
   (Match 'e' 'b'
Restart)))


 Yes, but monads aren't just about sequencing. I like to see a monad as a
 generalized computation (e.g. nondeterministic, involving IO, involving
 state etc). Therefore, you should ask yourself if TableDefinition can be
 seen as some kind of abstract computation. In particular, can you
 execute a computation and extract its result? as in

   do
     r <- Match 'a' 'c' Restart
     if r == 'y' then Restart else Match 2 3 (Match 3 4 Restart)

 Doesn't immediately make sense to me. In particular think about the
 different possible result types of a TableDefinition computation.

 If all you want is sequencing, you might be looking for a Monoid instance
 instead, corresponding to the Monoid instance of [b], where b=(a,a) here.

   [...]

 

 I'd like to define the same data structure as:

 newTable :: TableDefinition Char
 newTable = do
  'a' :> 'b'
  'A' :> 'B'

 But I can't figure a way to define a Monad instance for that. :(


 The desugaring of the example looks like this:

   ('a' :> 'b') >> ('A' :> 'B')

 Only (>>) is used, but not (>>=) (i.e. results are always discarded). If
 this is the only case that makes sense, you're probably looking for a
 Monoid instead (see above)

 -- Steffen


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to define a Monad instance

2012-07-30 Thread Ryan Ingram
A couple typos:

instance Monad Replacer1 where
->
instance Monad (Replacer1 k) where

instance Monad Replacer2 k where
->
instance Monad (Replacer2 k) where

I haven't tested any of this code, so you may have to fix some minor type
errors.

On Mon, Jul 30, 2012 at 10:38 PM, Ryan Ingram ryani.s...@gmail.com wrote:

 To take this a step further, if what you really want is the syntax sugar
 for do-notation (and I understand that, I love sweet, sweet syntactical
 sugar), you are probably implementing a Writer monad over some monoid.

 Here's two data structures that can encode this type;

 data Replacer1 k a = Replacer1 (k -> Maybe k) a
 data Replacer2 k a = Replacer2 [(k,k)] a

 instance Monad Replacer1 where
     return x = Replacer1 (\_ -> Nothing) x
     Replacer1 ka a >>= f = result where
         Replacer1 kb b = f a
         result = Replacer1 (\x -> ka x `mplus` kb x) b

 (!) :: Eq k => k -> k -> Replacer1 k ()
 x ! y = Replacer1 (\k -> if k == x then Just y else Nothing) ()

 replace1 :: Replacer1 k () -> [k] -> [k]    -- look ma, no Eq requirement!
 replace1 (Replacer1 k ()) = map (\x -> fromMaybe x $ k x) -- from Data.Maybe

 table1 :: Replacer1 Char ()
 table1 = do
 'a' ! 'b'
 'A' ! 'B'

 test = replace1 table1 "All I want"

 -- Exercise: what changes if we switch ka and kb in the result of (>>=)?
 -- When does it matter?

 -- Exercises for you to implement:
 instance Monad Replacer2 k where
 replacer :: Eq k => Replacer2 k -> [k] -> [k]
 ($) :: k -> k -> Replacer2 k

 -- Exercise: Lets make use of the fact that we're a monad!
 --
 -- What if the operator ! had a different type?
 -- (!) :: Eq k => k -> k -> Replacer k Integer
 -- which returns the count of replacements done.
 --
 -- table3 = do
 -- count - 'a' ! 'b'
 -- when (count > 3) ('A' ! 'B')
 -- return ()
 --
 -- Do any of the data structures I've given work?  Why or why not?
 -- Can you come up with a way to implement this?

   -- ryan


 On Sat, Jul 28, 2012 at 10:07 AM, Steffen Schuldenzucker 
 sschuldenzuc...@uni-bonn.de wrote:

 On 07/28/2012 03:35 PM, Thiago Negri wrote:
  [...]

  As Monads are used for sequencing, first thing I did was to define the
 following data type:

 data TableDefinition a = Match a a (TableDefinition a) | Restart


 So TableDefinition a is like [(a, a)].

  [...]

 

 So, to create a replacement table:

 table' :: TableDefinition Char
 table' =
  Match 'a' 'b'
  (Match 'A' 'B'
   Restart)

 It look like a Monad (for me), as I can sequence any number of
 replacement values:

 table'' :: TableDefinition Char
 table'' = Match 'a' 'c'
   (Match 'c' 'a'
   (Match 'b' 'e'
   (Match 'e' 'b'
Restart)))


 Yes, but monads aren't just about sequencing. I like to see a monad as a
 generalized computation (e.g. nondeterministic, involving IO, involving
 state etc). Therefore, you should ask yourself if TableDefinition can be
 seen as some kind of abstract computation. In particular, can you
 execute a computation and extract its result? as in

   do
     r <- Match 'a' 'c' Restart
     if r == 'y' then Restart else Match 2 3 (Match 3 4 Restart)

 Doesn't immediately make sense to me. In particular think about the
 different possible result types of a TableDefinition computation.

 If all you want is sequencing, you might be looking for a Monoid instance
 instead, corresponding to the Monoid instance of [b], where b=(a,a) here.

   [...]

 

 I'd like to define the same data structure as:

 newTable :: TableDefinition Char
 newTable = do
  'a' :> 'b'
  'A' :> 'B'

 But I can't figure a way to define a Monad instance for that. :(


 The desugaring of the example looks like this:

   ('a' :> 'b') >> ('A' :> 'B')

 Only (>>) is used, but not (>>=) (i.e. results are always discarded). If
 this is the only case that makes sense, you're probably looking for a
 Monoid instead (see above)

 -- Steffen


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy producing a list in the strict ST monad

2012-06-19 Thread Ryan Ingram
It doesn't work like that by default, and here is why:

-- an infinite tree of values
data InfTree a = Branch a (InfTree a) (InfTree a)

buildTree :: Num a => STRef s a -> ST s (InfTree a)
buildTree ref = do
    n <- readSTRef ref
    writeSTRef ref $! (n+1)
    left <- buildTree ref
    right <- buildTree ref
    return (Branch n left right)

makeTree :: Num a => ST s (InfTree a)
makeTree = do
    ref <- newSTRef 0
    buildTree ref

-- should be referentially transparent, i.e. these two expressions should
be equivalent
pureInfTree1, pureInfTree2 :: InfTree Integer
pureInfTree1 = runST makeTree
pureInfTree2 = runST makeTree

element (Branch x _ _) = x
goLeft (Branch _ x _) = x
goRight (Branch _ _ x) = x

test :: IO ()
test = do
   let left1 = goLeft pureInfTree1
   let right1 = goRight pureInfTree1
   let left2 = goLeft pureInfTree2
   let right2 = goRight pureInfTree2
   evaluate (element left1)
   evaluate (element right1)
   evaluate (element right2)
   evaluate (element left2)
   print (element left1 == element left2)  -- should be True!

Right now this code diverges, because buildTree diverges.  If buildTree was
lazy, test would print False because of the order of evaluation.  You can
make buildTree lazy if you want:

import Control.Monad.ST.Unsafe

buildTree :: Num a => STRef s a -> ST s (InfTree a)
buildTree ref = do
    n <- readSTRef ref
    writeSTRef ref $! (n+1)
    left <- unsafeInterleaveST (buildTree ref)
    right <- unsafeInterleaveST (buildTree ref)
    return (Branch n left right)

In order to safely use unsafeInterleaveST, you need to prove that none of
the references used by the computation passed to unsafeInterleaveST can be
used by any code after the unsafeInterleaveST; so this 'lazy' list
generation is safe:

buildList :: Num a => ST s [a]
buildList = do
    ref <- newSTRef 0
    let loop = do
            n <- readSTRef ref
            writeSTRef ref $! (n+1)
            rest <- unsafeInterleaveST loop
            return (n : rest)
    loop

because we are guaranteed that the only reference to ref exists inside the
loop which uses it in a linear fashion.  So you may be able to get away
with it... but you have to make a proof manually that the compiler isn't
able to infer for you.
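
For example (a hypothetical use of the lazy buildList above; the delayed ST
actions run only as the list is forced):

firstTen :: [Integer]
firstTen = take 10 (runST buildList)   -- [0..9]; later elements are never generated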

  -- ryan


On Sun, Jun 10, 2012 at 5:37 AM, Nicu Ionita nicu.ion...@acons.at wrote:

 Hi,

 I'm trying to produce a list in the strict ST monad. The documentation  of
 ST says that the monad is strict in the state, but not in the values. So I
 expect that, when returning a list, I get back only the Cons (with 2
 unevaluated thunks). Now, when I need the first element (head), this will
 be evaluated (with whatever actions are necessary in the ST universe) and
 the tail is again a Cons with unevaluated parts.

 Internally my list is stored in a vector, and the elements are generated
 phasewise, each phase generating 0 or more elements in the vector, and a
 fuction splitMove is driving this process (see code below). I would expect
 that the first phase triggers, generates some moves, then (after these are
 consumed from the list) the next phase triggers generating the next few
 moves and so on.

 But when I trace the phases (Debug.Trace.trace) I get all the trace
 messages in front of the first move:

 Moves for fen: rnbqkbnr/pp3ppp/4p3/2pp4/3P4/2NQ4/PPP1/R1B1KBNR w
 After move generation...
 0 >= 0 : next phase
 3 >= 3 : next phase
 3 >= 3 : next phase
 42 >= 42 : next phase
 44 >= 44 : next phase
 d4c5
 g1f3
 g1h3
 c3b1
 ...

 This seems not to be just an unhappy combination between trace and ST, as
 also the program without trace is beeing slower than the same implemented
 with plain lists, which is hard to believe (in many cases the move list is
 not consumed to the end).

 I wonder if my expectation is wrong, but I don't find a way to do this.
 Here is the (incomplete) code:

 produceList ... = runST $ do
ml - newMList ...
listMoves ml

 -- Transforms a move list to a list of moves - lazy
 listMoves :: MList s -> ST s [Move]
 listMoves ml = do
     sm <- splitMove ml
     case sm of
         Just (m, ml') -> do
             rest <- listMoves ml'
             return $ m : rest
         Nothing -> return []

 -- Split the first move from the move list and return it together with
 -- the new move list (without the first move). Return Nothing if there
 -- is no further move
 splitMove :: MList s -> ST s (Maybe (Move, MList s))
 splitMove ml
     | mlToMove ml >= mlToGen ml = do
         mml <- trace trm $ nextPhase ml
         case mml of
             Nothing  -> return Nothing
             Just ml' -> splitMove ml'
     | otherwise = do
         m <- U.unsafeRead (mlVec ml) (mlToMove ml)
         case mlCheck ml ml m of
             Ok    -> return $ Just (m, ml1)
             Skip  -> splitMove ml1
             Delay -> splitMove ml1 { mlBads = m : mlBads ml }
     where ml1 = ml { mlToMove = mlToMove ml + 1 }
           trm = show (mlToMove ml) ++ " >= " ++ show (mlToGen ml) ++ " : next phase"

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: [Haskell-cafe] Arithmetic expressions with GADTs: parsing

2012-06-04 Thread Ryan Ingram
Another option is to reify the type so that you can get it back somehow.
Here's a few diffs to your file (I've attached the full code):

A new type:
data Typed f where
   TDouble :: f Double -> Typed f
   TBool :: f Bool -> Typed f

runT :: (f Double -> a) -> (f Bool -> a) -> Typed f -> a
runT k _ (TDouble x) = k x
runT _ k (TBool x)   = k x

New version of pExpr that can parse both expression types, by tagging with
the type
-- pExpr = pArit <|> pBool <|> pEqual
pExpr = (TDouble <$> pArit) <|> (TBool <$> pBool) <|> (TDouble <$> pEqual)

and now main:
main = do line <- getLine
          case parse pExpr "" line of
            Left msg -> putStrLn (show msg)
            Right e -> putStrLn (runT (show . eval) (show . eval) e)

What I'm doing here is reifying the possible types of top level expressions
and then providing a handler in main which works on all possible types.
There are other ways to do this (embed any expression in an existential,
for example), but this makes it really clear what is going on, and shows
the way forward for parsing a larger typed language.
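
(For comparison, a rough sketch of the existential alternative mentioned
above -- the names here are made up, and it assumes the Expr and eval
definitions from the attached file:)

data SomeExpr where
  SomeExpr :: Show a => Expr a -> SomeExpr

evalSome :: SomeExpr -> String
evalSome (SomeExpr e) = show (eval e)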

  -- ryan

On Wed, May 2, 2012 at 6:08 AM, j.romi...@gmail.com wrote:

 On Wed, May 02, 2012 at 03:02:46PM +0300, Roman Cheplyaka wrote:
  * j.romi...@gmail.com j.romi...@gmail.com [2012-05-02 08:03:45-0300]
 [...]
  The alternatives given to | must be of the same type. In your case,
  one is Expr Double and one is Expr Bool.
 
  Inclusion of pBool in pFactor is probably a mistake — unless you're
  going to multiply booleans.

 You are right in the sense that I cannot mix Expr Bool and Expr Double
 in a (O op l r) expression.

 But the parser should be able to parse any form of expressions. So I
 rewrite my program to take this into account.

 The new versions still does not compile:

 Expr.hs:27:23:
 Couldn't match expected type `Double' with actual type `Bool'
 Expected type: ParsecT
 String () Data.Functor.Identity.Identity (Expr Double)
  Actual type: ParsecT
 String () Data.Functor.Identity.Identity (Expr Bool)
In the first argument of `(<|>)', namely `pBool'
In the second argument of `(<|>)', namely `pBool <|> pEqual'

 Romildo

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




Expr.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Annotations in abstract syntax tree

2012-04-27 Thread Ryan Ingram
For simple datatypes like this, GHC can derive the Functor implementation
for you:

{-# LANGUAGE DeriveFunctor #-}

data ExprF r = 
  deriving (..., Functor)

See http://www.haskell.org/ghc/docs/7.0.4/html/users_guide/deriving.html

  -- ryan

On Fri, Apr 27, 2012 at 5:40 AM, Stefan Holdermans ste...@vectorfabrics.com
 wrote:

 Romildo,

  I could write the (Functor ExprF) instance:
 
   instance Functor ExprF where
     fmap f expr = case expr of
       Num n -> Num n
       Var v -> Var v
       Bin op x y -> Bin op (f x) (f y)

 Yes, excellent. That'll do.

 Cheers,

  Stefan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I Need a Better Functional Language!

2012-04-09 Thread Ryan Ingram
A concurring opinion here, and an example.

iff :: Bool -> a -> a -> a
iff True x _ = x
iff False _ x = x

f, g :: Bool -> Bool
f x = x
g x = iff x True False

Are these two functions equal?  I would say yes, they are.  Yet once you
can pattern match on functions, you can easily tell these functions apart,
and create a function

h :: (Bool -> Bool) -> Bool
such that h f = True but h g = False.

  -- ryan

On Thu, Apr 5, 2012 at 8:52 AM, Dan Doel dan.d...@gmail.com wrote:

 On Thu, Apr 5, 2012 at 10:14 AM, Grigory Sarnitskiy sargrig...@ya.ru
 wrote:
  First, what are 'functions' we are interested at? It can't be the usual
 set-theoretic definition, since it is not constructive. The constructive
 definition should imply functions that can be constructed, computed. Thus
 these are computable functions that should be of our concern. But
 computable functions in essence are just a synonym for programs.

 This is a flawed premise. The point of working with functions is
 abstraction, and that abstraction is given by extensional equality of
 functions:

f = g  iff  forall x. f x = g x

 So functions are not synonymous with programs or algorithms, they
 correspond to an equivalence class of algorithms that produce the same
 results from the same starting points. If you can access the source of
 functions within the language, this abstraction has been broken.

 And this abstraction is useful, because it allows you to switch freely
 between programs that do the same thing without having to worry that
 someone somewhere is relying on the particular algorithm or source.
 This is the heart of optimization and refactoring and the like.

 There are places for metaprogramming, or perhaps even a type of
 algorithms that can be distinguished by means other than the functions
 they compute. But to say that functions are that type is false, and
 that functions should mean that is, I think, wrong headed.

 -- Dan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-20 Thread Ryan Ingram
This instance can be made more general without changing the code; change
the first line to

instance Num a => Num (e -> a) where

I think this version doesn't even require FlexibleInstances.

This lets you do

f x = if x then 2 else 3
g x = if x then 5 else 10

-- f + g = \x -> if x then 7 else 13

  -- ryan

On Mon, Mar 19, 2012 at 10:38 AM, Ozgur Akgun ozgurak...@gmail.com wrote:

 Hi,

 If you are feeling adventurous enough, you can define a num instance for
 functions:

 {-# LANGUAGE FlexibleInstances #-}

 instance Num a => Num (a -> a) where
     f + g = \ x -> f x + g x
     f - g = \ x -> f x - g x
     f * g = \ x -> f x * g x
     abs f = abs . f
     signum f = signum . f
     fromInteger = const . fromInteger

 ghci> let f x = x * 2
 ghci> let g x = x * 3
 ghci> (f + g) 3
 15
 ghci> (f+g+2) 2
 17

 HTH,
 Ozgur


 On 19 March 2012 16:57, sdiy...@sjtu.edu.cn wrote:

 By arithmetic I mean the everyday arithmetic operations used in
 engineering.
 In signal processing for example, we write a lot of expressions like
 f(t)=g(t)+h(t)+g'(t) or f(t)=g(t)*h(t).
 I feel it would be very natural to have in haskell something like
   g::Float->Float
   --define g here
   h::Float->Float
   --define h here
   f::Float->Float
   f = g+h --instead of f t = g t+h t
   --Of course, f = g+h is defined as f t = g t+h t



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-20 Thread Ryan Ingram
Oh man, I came late to this party.

I'll throw what little weight I have here behind Jerry's argument.  Yes,
this change to base is not Haskell2010 compatible, but it's still a good
change and I hope that Haskell2012 or 2013 or whatever the next version of
the standard that comes out incorporates it.

As to why it's a good change:

(1) People were doing it anyways with bogus Eq instances; the syntactic
benefit of being able to use integer literals is huge; using the standard
+/-/* etc functions is a nice bonus.  For an example, see
http://twanvl.nl/blog/haskell/simple-reflection-of-expressions
(2) Pattern matching on numeric literals is what requires Eq, and combined
with (1) leads to fragile code.  Now, for example,

fac 0 = 1
fac n = n * fac (n-1)

Now the type of fac explicitly states that it requires Eq to work; with the
'hack' version of Eq in the expressions above, fac x doesn't terminate
and instead gives x * (x-1) * (x-1-1) * ... forever.  Other versions (like
the version in this thread with Num (e - a)) turn fac into a function that
always returns bottom.

  -- ryan

On Tue, Mar 20, 2012 at 12:02 PM, Ryan Ingram ryani.s...@gmail.com wrote:

 This instance can be made more general without changing the code; change
 the first line to

 instance Num a => Num (e -> a) where

 I think this version doesn't even require FlexibleInstances.

 This lets you do

 f x = if x then 2 else 3
 g x = if x then 5 else 10

 -- f + g = \x -> if x then 7 else 13

   -- ryan

 On Mon, Mar 19, 2012 at 10:38 AM, Ozgur Akgun ozgurak...@gmail.com wrote:

 Hi,

 If you are feeling adventurous enough, you can define a num instance for
 functions:

 {-# LANGUAGE FlexibleInstances #-}

 instance Num a => Num (a -> a) where
     f + g = \ x -> f x + g x
     f - g = \ x -> f x - g x
     f * g = \ x -> f x * g x
     abs f = abs . f
     signum f = signum . f
     fromInteger = const . fromInteger

 ghci> let f x = x * 2
 ghci> let g x = x * 3
 ghci> (f + g) 3
 15
 ghci> (f+g+2) 2
 17

 HTH,
 Ozgur


 On 19 March 2012 16:57, sdiy...@sjtu.edu.cn wrote:

 By arithmetic I mean the everyday arithmetic operations used in
 engineering.
 In signal processing for example, we write a lot of expressions like
 f(t)=g(t)+h(t)+g'(t) or f(t)=g(t)*h(t).
 I feel it would be very natural to have in haskell something like
   g::Float->Float
   --define g here
   h::Float->Float
   --define h here
   f::Float->Float
   f = g+h --instead of f t = g t+h t
   --Of course, f = g+h is defined as f t = g t+h t



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Theoretical question: are side effects necessary?

2012-03-16 Thread Ryan Ingram
You can emulate mutation with at most O(log(n)) penalty using a map.  Given
that memory is of fixed size, log2(n) <= 64, so for real-world programs
this becomes O(1).  So any program you can implement using mutation can be
implemented in a pure language with the same big-O running time (but much
much worse constant factors, given this naive translation).
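
A minimal sketch of that naive translation (the names here are made up; a
Data.Map stands in for the heap, keyed by "address"):

import qualified Data.Map as M

type Store a = M.Map Int a          -- the emulated mutable memory

writeRef :: Int -> a -> Store a -> Store a
writeRef = M.insert                 -- O(log n) "mutation"

readRef :: Int -> Store a -> Maybe a
readRef = M.lookup                  -- O(log n) "dereference"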

Other external state is harder to emulate.  For example, communication over
a network most definitely requires some concept of a 'computation with side
effects' since the network's response could change from request to request.

In GHC, even IO objects are pure, but they conceptually represent functions
that modify the state of reality.

  -- ryan

On Fri, Mar 16, 2012 at 5:23 AM, Christopher Svanefalk 
christopher.svanef...@gmail.com wrote:

 Dear all,

 there is a question I have been thinking about a bit. In short, we could
 simply formulate it like this:

 Are there any problems which *cannot *be solved a side effect-free
 language (such as Haskell)? In other words, are there problems that would
 explicitly demand semantics that can only be provided by a language
 allowing direct modification of external state variables, such as Java and
 C++?

 If not, are there problems which are simply *infeasible *to solve with a
 side effect-free language?

 I realize this question is very broad, and I am not expecting an exact
 answer by any means. I am asking since I am curious about the relation
 between functional and imperative/procedural languages in general. I
 primarily come from a Java background, but I can program Haskell and
 Erlang, and have recently started exploring Scala, so this would be
 interesting to know.

 --
 Best,

 Christopher Svanefalk

 Student,
 Department of Computer Science and Engineering
 University of Gothenburg / Chalmers University of Technology


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there a better way to subtyping?

2012-03-13 Thread Ryan Ingram
data Common = ...
data A = ...
data B = ...
data C = ...
data Super =
    SubA { commonFields :: Common, getA :: A }
  | SubB { commonFields :: Common, getB :: B }
  | SubC { commonFields :: Common, getC :: C }

foldWithSubtype :: (A -> r) -> (B -> r) -> (C -> r) -> Super -> r
foldWithSubtype k _ _ (SubA {getA = a}) = k a
foldWithSubtype _ k _ (SubB {getB = b}) = k b
foldWithSubtype _ _ k (SubC {getC = c}) = k c

foldSuper :: (A -> Common -> r) -> (B -> Common -> r) -> (C -> Common -> r)
          -> Super -> r
foldSuper ka kb kc sup = foldWithSubtype ka kb kc sup $ commonFields sup
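
A possible use (hypothetical, since the field types above are elided): each
handler gets both the subtype-specific value and the shared fields.

describe :: Super -> (Common, String)
describe = foldSuper (\_a c -> (c, "A")) (\_b c -> (c, "B")) (\_c c -> (c, "C"))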


On Mon, Mar 12, 2012 at 8:32 AM, Jeff Shaw shawj...@msu.edu wrote:

 More specifically, if I have a record type from which I construct multiple
 sub-record types, and I want to store these in a collection which I want to
 map over while preserving the ability to get at the sub-fields, is there a
 better way to do it than to have an enumeration for the sub-types and then
 use Dynamic? I also have a nastier version that doesn't require the
 enumeration, which throws an exception when fromDynamic can't return a
 value with one of the expected types.

 {-# LANGUAGE Rank2Types, DeriveDataTypeable #-}
 module Super where

 import Data.Dynamic
 import Data.Typeable
 import Data.Maybe

 data Super a = Super { commonFields :: (), subFields :: a }
deriving Typeable

 data SubTypes = SubA | SubB | SubC

 data A = A { aFields :: () }
deriving Typeable

 data B = B { bFields :: () }
deriving Typeable

 data C = C { cFields :: () }
deriving Typeable

 doSomethingWithSubType :: (Super A -> ()) -> (Super B -> ()) -> (Super C
 -> ()) -> (SubTypes, Dynamic) -> Maybe ()
 doSomethingWithSubType a _ _ (SubA, dynamic) = fromDynamic dynamic >>=
 return . a
 doSomethingWithSubType _ b _ (SubB, dynamic) = fromDynamic dynamic >>=
 return . b
 doSomethingWithSubType _ _ c (SubC, dynamic) = fromDynamic dynamic >>=
 return . c

 doSomethingWithSubType2 :: (Super A -> ()) -> (Super B -> ()) -> (Super C
 -> ()) -> Dynamic -> ()
 doSomethingWithSubType2 a b c dynamic =
    let dynamicAsA = fromDynamic dynamic :: Maybe (Super A)
        dynamicAsB = fromDynamic dynamic :: Maybe (Super B)
        dynamicAsC = fromDynamic dynamic :: Maybe (Super C) in
    head $ catMaybes [ dynamicAsA >>= return . a
                     , dynamicAsB >>= return . b
                     , dynamicAsC >>= return . c]


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] using mutable data structures in pure functions

2012-03-13 Thread Ryan Ingram
On Sun, Mar 11, 2012 at 8:38 PM, E R pc88m...@gmail.com wrote:

 A pure function can allocate and modify memory as long as a) it never
 returns a reference to the memory or b) it never again modifies the
 memory once it returns (i.e. it returns an immutable object).


That's a reasonable first approximation to the problem, yes.  It gets a bit
more complicated due to laziness (what if the mutation gets delayed until
some later part of the output is examined?)

So, again, what is the Haskell philosophy towards using mutable data
 structures in pure functions? Is it:

 1. leave it to the compiler to find these kinds of opportunities
 2. just use the immutable data structures - after all they are just as
 good (at least asymptotically)
 3. you don't want to use mutable data structures because of _
 4. it does happen, and some examples are __


There are two ways in which Haskell encourages the use of mutable data
structures in a pure way.

The first is in the inherent mutation caused by laziness.  For example:

type Positive = Integer

-- trie of binary representation of positive number
-- [1] -> tOne
-- x ++ [0] -> lookup x . tEven
-- x ++ [1] -> lookup x . tOdd
data Trie a = Trie {
    tOne :: a,
    tEven :: Trie a,
    tOdd :: Trie a
}

lookupTrie :: Trie a -> Positive -> a
lookupTrie t 1 = tOne t
lookupTrie t n
    | even n = lookupTrie (tEven t) (n `div` 2)
    | otherwise = lookupTrie (tOdd t) (n `div` 2) -- div drops remainder

makeTrie :: (Positive -> a) -> Trie a
makeTrie f = Trie (f 1) e o where
    e = makeTrie $ \n -> f (2*n)
    o = makeTrie $ \n -> f (2*n + 1)

memoize :: (Positive -> a) -> (Positive -> a)
memoize = lookupTrie . makeTrie

collatz_rec :: (Positive -> Integer) -> Positive -> Integer
collatz_rec f 1 = 0
collatz_rec f n
    | even n = 1 + f (n `div` 2)
    | otherwise = 1 + f (3*n + 1)

collatz = memoize (collatz_rec collatz)

In this case, makeTrie creates a thunk, and it's only evaluated where
requested by lookupTrie.  You can call collatz at many different values and
later calls will be much faster, as the mutation caused by lazy evaluation
'remembers' the values.
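
For example (assuming the definitions above; the first call pays to build the
needed spine of the trie, later calls are just lookups):

main :: IO ()
main = do
    print (map collatz [1 .. 10])   -- forces a few branches of the trie
    print (collatz 27)              -- benefits from entries already forced above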

The second is by explicitly documenting that you are using a temporarily
mutable structure, which is the ST monad:

instance Monad (ST s)
newSTRef :: a -> ST s (STRef s a)
readSTRef :: STRef s a -> ST s a
writeSTRef :: STRef s a -> a -> ST s ()
-- and similar interface for mutable STArrays

runST :: (forall s. ST s a) -> a -- note higher rank type

A computation in the ST monad is an impure computation that can modify
memory, but only memory allocated within that same computation.

The higher rank type in runST makes it safe to do so--references from one
ST computation cannot escape to any other ST computation.  So even though
internally some pure value might rely on an impure computation, it's safe
to do so in a pure context.

Here's a sample implementation of ST:

-- DO NOT EXPORT THESE CONSTRUCTORS
newtype ST s a = ST (IO a)
newtype STRef s a = STRef { getRef :: IORef a }

runST :: (forall s. ST s a) -> a
runST (ST act) = unsafePerformIO act -- Actually safe!

newSTRef :: a -> ST s (STRef s a)
newSTRef a  = ST $ liftM STRef (newIORef a)

readSTRef :: STRef s a -> ST s a
readSTRef (STRef r) = ST $ readIORef r

writeSTRef :: STRef s a -> a -> ST s ()
writeSTRef (STRef r) a = ST $ writeIORef r a

There's some usage examples at http://www.haskell.org/haskellwiki/Monad/ST

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Higher types in contexts

2012-03-06 Thread Ryan Ingram
I find it easy to understand this distinction by writing out the types of
the constructors and case expressions for these objects, in a language like
system F:

(here, {x :: t} means a type argument with name x of kind t)

Exists :: {f :: *->*} -> {a :: *} -> f a -> Exists f
Forall :: {f :: *->*} -> ({a :: *} -> f a) -> Forall f

Notice the higher rank type in the constructor 'Forall'.

Similarly, the case deconstruction for these:

caseExists :: {r :: *} -> {f :: *->*} -> ({a :: *} -> f a -> r) -> Exists f
-> r
caseForall :: {r :: *} -> {f :: *->*} -> (({a :: *} -> f a) -> r) -> Forall
f -> r

The function passed to caseExists needs to be able to handle any type 'a'
we throw at it, whereas the function passed to caseForall can choose what
'a' it wants to use (and can choose multiple different 'a's by calling the
({a::*} -> f a) parameter function multiple times).  In the simple case that
the case function only instantiates 'a' at a single type, we can simplify
this:

caseForall' :: {r :: *} -> {f :: * -> *} -> {a :: *} -> (f a -> r) ->
Forall f -> r

and this function is definable in terms of caseForall:

caseForall' {r} {f} {a} k v = caseForall {r} {f} (\mk_fa -> k (mk_fa {a})) v
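
For readers who want to try this in GHC, here is a rough Haskell rendering of
the same constructors and case functions (a sketch; the explicit {type}
arguments above just become ordinary polymorphism):

{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

data Exists f = forall a. Exists (f a)
newtype Forall f = Forall (forall a. f a)

caseExists :: (forall a. f a -> r) -> Exists f -> r
caseExists k (Exists fa) = k fa

caseForall :: ((forall a. f a) -> r) -> Forall f -> r
caseForall k (Forall fa) = k fa

caseForall' :: (f a -> r) -> Forall f -> r
caseForall' k (Forall fa) = k fa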

  -- ryan

On Mon, Mar 5, 2012 at 9:37 PM, wren ng thornton w...@freegeek.org wrote:

 On 3/5/12 5:13 PM, AntC wrote:

 I've tried that ListFunc wrapping you suggest:
 [...]

 But I can't 'dig out' the H-R function and apply it (not even
 monomorphically):


 That's because the suggestion changed it from a universal quantifier to an
 existential quantifier.

data Exists f = forall a. Exists (f a)

data Forall f = Forall (forall a. f a)

 With Exists, we're essentially storing a pair of the actual type 'a' and
 then the f a itself. We can't just pull out the f a and use it, because we
 don't know what 'a' is. We can bring it into scope temporarily by case
 matching on the Exists f, which allows us to use polymorphic functions, but
 we still don't actually know what it is so we can *only* use polymorphic
 functions.

 Conversely, with Forall we're storing some f a value which is in fact
 polymorphic in 'a'. Here, because it's polymorphic, the caller is free to
 choose what value of 'a' they would like the f a to use. Indeed, they can
 choose multiple different values of 'a' and get an f a for each of them.

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Transactional memory going mainstream with Intel Haswell

2012-02-16 Thread Ryan Ingram
It seems like it would still be useful for *optimizing* the implementation
of STM in Haskell; in particular, small transactions seem like a great way
to implement lock-free data structures by handling the non-composability of
compare-and-swap.

So while you wouldn't implement "atomically a" by "XBEGIN; a; XEND", you
might implement atomically by doing the transaction log as we do now, then
using XBEGIN/XEND around the commit operation.

  -- ryan

On Thu, Feb 9, 2012 at 11:12 AM, Austin Seipp mad@gmail.com wrote:

 Duncan Coutts talked a bit about this on Reddit here:


 http://www.reddit.com/r/programming/comments/pfnkx/intel_details_hardware_transactional_memory/c3p4oq7

 On Thu, Feb 9, 2012 at 12:43 PM, Ben midfi...@gmail.com wrote:
 
 http://arstechnica.com/business/news/2012/02/transactional-memory-going-mainstream-with-intel-haswell.ars
 
  would any haskell STM expert care to comment on the possibilities of
 hardware acceleration?
 
  best, ben
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe



 --
 Regards,
 Austin

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monadic bind fixity: do vs ()

2012-02-16 Thread Ryan Ingram
The desugaring is simpler with the current setup:

do { e }
  = e
do { let p = e; STMTS }
  = let p = e in (do { STMTS })
do { e; STMTS }
  = e >> (do { STMTS })
do { p <- e; STMTS }
  = e >>= \x -> case x of { p -> (do { STMTS }) ; _ -> fail "pattern match
failure" }
   [x is a fresh variable]

My guess is that >> is infixl because
  (1) m >>= f >>= g should make sense
  (2) >> should match fixity and precedence with >>=

On Tue, Feb 14, 2012 at 9:50 PM, Michael Baikov manpac...@gmail.com wrote:

 Most docs ([1], [2]) about do-notation syntactic sugar tends  to
 describe following expressions as equivalent:

 do { a; b; c }  and a >> b >> c, but they are not: first one gets
 de-sugared into  a >> (b >> c), second one is equivalent to (a >>
 b) >> c, because (>>) is declared using infixl.

 This should not be a problem, the monadic law of Associativity states that
 (m >>= f) >>= g  ≡  m >>= (\x -> f x >>= g), but this leads to
 generating different Core output and may lead to different performance
 (and it does, do { Just 4 ; Just 4 ... } is about 2% faster than Just
 4 >> Just 4 >> ... if compiled with -O0, but 13% slower when compiled
 with -O1)

 This also leads to lots of fun when your monad  breaks Associativity law :)

 Is there any reasons except for those 13% speed gain for this?

 [1]: http://en.wikibooks.org/wiki/Haskell/do_Notation
 [2]: http://book.realworldhaskell.org/read/monads.html#monads.dot

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Natural Transformations and fmap

2012-01-27 Thread Ryan Ingram
I know a bit of category theory, but I'm trying to look at it from a
fundamental perspective; I know that I intend (m :-> n) to mean natural
transformation from functor m to functor n, but the type itself (forall x.
m x -> n x) doesn't necessarily enforce that.

However, the type of natural transformations comes with a free theorem, for
example

   concat :: [[a]] -> [a]

has the free theorem

   forall f :: a -> b, f strict and total, fmap f . concat = concat . fmap
(fmap f)

The strictness condition is needed; consider

   broken_concat :: [[a]] -> [a]
   broken_concat _ = [undefined]
   f = const ()

   fmap f (broken_concat []) = fmap f [undefined] = [()]
   broken_concat (fmap (fmap f) []) = broken_concat [] = [undefined]

The 'taming selective strictness' version of the free theorem generator[1]
allows removing the totality condition on f, but not the strictness
condition.

But in the case of concat, you can prove a stronger theorem:

   forall f :: a -> b, fmap f . concat = concat . fmap (fmap f)

My suspicion is that this stronger theorem holds for all strict and total
natural transformations, but I don't know how to go about proving that
suspicion.  I'm a hobbyist mathematician and a professional programmer, not
the other way around :)

I think it's probably easy to prove that the monoid laws imply that mult'
and one' are strict and total.

 Thus, you can in principle define plenty of natural transformations which
do not have the type f :: forall X. M X -> N X.

Can you suggest one?  I don't see how you can get around f needing to act
at multiple types since it can occur before and after g's fmap:

fmap g . f = f . fmap g

   M A --fmap_M g--> M B
    |                 |
   f_A               f_B
    |                 |
    v                 v
   N A --fmap_N g--> N B

You can have n = m, of course, but that just means f :: M :-> M.

  -- ryan

[1] http://www-ps.iai.uni-bonn.de/cgi-bin/polyseq.cgi
Use this term:
/\a. let flipappend = (\xs :: [a]. fix (\rec :: [a] -> [a]. \ys :: [a].
  case ys of { [] -> xs; y:zs -> y : rec zs })) in
 let concat = fix (\rec :: [[a]] -> [a]. \xss :: [[a]].
   case xss of { [] -> []_{a}; xs:yss -> flipappend
(rec yss) xs}) in
 concat

[2] See http://hpaste.org/56903  Summary:

-- both of these types have obvious Functor instances
newtype (f :. g) x = O (f (g x))
data Id x = Id x

class Functor m => Monad m where
   one'  :: Id        :-> m
   mult' :: (m :. m)  :-> m

-- instances are required to satisfy monoid laws:
--   one' is a left/right identity for mult':
--     forall x :: m a
--       mult' . O . one' . Id $ x = x = mult' . O . fmap (one' . Id) $ x
--   mult' is associative:
--     forall x :: m (m (m a)).
--       mult' . O . mult' . O $ x = mult' . O . fmap (mult' . O) $ x
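
(As an aside, the usual Monad operations fall out of one'/mult' directly --
a sketch, assuming the definitions summarized above:)

return' :: Monad m => a -> m a
return' = one' . Id

bind' :: Monad m => m a -> (a -> m b) -> m b
bind' m f = mult' . O . fmap f $ m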

On Thu, Jan 26, 2012 at 9:30 PM, wren ng thornton w...@freegeek.org wrote:

 On 1/23/12 10:39 PM, Ryan Ingram wrote:

 type m :-> n = (forall x. m x -> n x)
 class Functor f where fmap :: forall a b. (a -> b) -> f a -> f b
 -- Functor identity law: fmap id = id
 -- Functor composition law fmap (f . g) = fmap f . fmap g

 Given Functors m and n, natural transformation f :: m :-> n, and g :: a ->
 b, how can I prove (f . fmap_m g) = (fmap_n g . f)?


 That is the defining property of natural transformations. To prove it for
 polymorphic functions in Haskell you'll probably want to leverage
 parametricity.


 I assume you don't know category theory, based on other emails in this
 thread. But the definition of a natural transformation is that it is a
 family of morphisms/functions { f_X :: M X - N X | X an object/type } such
 that for all g :: a - b we have that f_b . fmap_m g == fmap_n g . f_a

 Thus, you can in principle define plenty of natural transformations which
 do not have the type f :: forall X. M X -> N X. The only requirement is
 that the family of morphisms obeys that equation. It's nice however that if
 a function has that type, then it is guaranteed to satisfy the equation (so
 long as it doesn't break the rules by playing with strictness or other
 things that make it so Hask isn't actually a category).

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Natural Transformations and fmap

2012-01-26 Thread Ryan Ingram
I tried the free theorem generator (
http://www-ps.iai.uni-bonn.de/cgi-bin/free-theorems-webui.cgi) and it
wouldn't let me use generic functors, but playing with [] and Maybe leads
me to believe that the free theorem for :-> is

forall f :: m :-> n, forall g :: a -> b, g strict and total
fmap g . f = f . fmap g

This implies that the monad laws don't necessarily hold in situations like
\m -> m >>= const Nothing, which seems wrong to me.  The counterexamples (
http://www-ps.iai.uni-bonn.de/cgi-bin/exfind.cgi), however, all rely on
odd natural transformations like (\_ -> Just undefined).  My guess is
that there is a side condition we can put on f that is implied by the
monoid laws which doesn't require g to be strict or total.

  -- ryan

On Mon, Jan 23, 2012 at 10:23 PM, Brent Yorgey byor...@seas.upenn.edu wrote:

 On Mon, Jan 23, 2012 at 09:06:52PM -0800, Ryan Ingram wrote:
  On Mon, Jan 23, 2012 at 8:05 PM, Daniel Fischer 
  daniel.is.fisc...@googlemail.com wrote:
 
   On Tuesday 24 January 2012, 04:39:03, Ryan Ingram wrote:
At the end of that paste, I prove the three Haskell monad laws from
 the
functor laws and monoid-ish versions of the monad laws, but my
 proofs
all rely on a property of natural transformations that I'm not sure
 how
to prove; given
   
type m :-> n = (forall x. m x -> n x)
class Functor f where fmap :: forall a b. (a -> b) -> f a -> f b
-- Functor identity law: fmap id = id
-- Functor composition law fmap (f . g) = fmap f . fmap g

Given Functors m and n, natural transformation f :: m :-> n, and g
 :: a
-> b, how can I prove (f . fmap_m g) = (fmap_n g . f)?
  
   Unless I'm utterly confused, that's (part of) the definition of a
 natural
   transformation (for non-category-theorists).
  
 
  Alright, let's pretend I know nothing about natural transformations and
  just have the type declaration
 
type m :-> n = (forall x. m x -> n x)
 
  And I have
f :: M :-> N
g :: A -> B
  instance Functor M -- with proofs of functor laws
  instance Functor N -- with proofs of functor laws
 
  How can I prove
  fmap g . f :: M A -> N B
  =
  f . fmap g :: M A -> N B
 
  I assume I need to make some sort of appeal to the parametricity of
  M :- N.

 This is in fact precisely the free theorem you get from the
 parametricity of f.  Parametricity means that f must act uniformly
 for all x -- which is an intuitive way of saying that f really is a
 natural transformation.

 -Brent

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Natural Transformations and fmap

2012-01-23 Thread Ryan Ingram
I've been playing around with the relationship between monoids and monads
(see
http://www.jonmsterling.com/posts/2012-01-12-unifying-monoids-and-monads-with-polymorphic-kinds.html and
http://blog.sigfpe.com/2008/11/from-monoids-to-monads.html), and I put
together my own implementation which I'm quite happy with, that you can see
at http://hpaste.org/56903 ; relying only on the extensions RankNTypes,
TypeOperators, NoImplicitPrelude, ScopedTypeVariables;

At the end of that paste, I prove the three Haskell monad laws from the
functor laws and monoid-ish versions of the monad laws, but my proofs all
rely on a property of natural transformations that I'm not sure how to
prove; given

type m :- n = (forall x. m x -> n x)
class Functor f where fmap :: forall a b. (a -> b) -> f a -> f b
-- Functor identity law: fmap id = id
-- Functor composition law fmap (f . g) = fmap f . fmap g

Given Functors m and n, natural transformation f :: m :- n, and g :: a ->
b, how can I prove (f . fmap_m g) = (fmap_n g . f)?  Is there some more
fundamental law of natural transformations that I'm not aware of that I
need to use?  Is it possible to write a natural transformation in Haskell
that violates this law?

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Natural Transformations and fmap

2012-01-23 Thread Ryan Ingram
On Mon, Jan 23, 2012 at 8:05 PM, Daniel Fischer 
daniel.is.fisc...@googlemail.com wrote:

 On Tuesday 24 January 2012, 04:39:03, Ryan Ingram wrote:
  At the end of that paste, I prove the three Haskell monad laws from the
  functor laws and monoid-ish versions of the monad laws, but my proofs
  all rely on a property of natural transformations that I'm not sure how
  to prove; given
 
  type m :- n = (forall x. m x -> n x)
  class Functor f where fmap :: forall a b. (a -> b) -> f a -> f b
  -- Functor identity law: fmap id = id
  -- Functor composition law fmap (f . g) = fmap f . fmap g
 
  Given Functors m and n, natural transformation f :: m :- n, and g :: a
  -> b, how can I prove (f . fmap_m g) = (fmap_n g . f)?

 Unless I'm utterly confused, that's (part of) the definition of a natural
 transformation (for non-category-theorists).


Alright, let's pretend I know nothing about natural transformations and
just have the type declaration

type m :- n = (forall x. m x -> n x)

And I have
f :: M :- N
g :: A -> B
instance Functor M -- with proofs of functor laws
instance Functor N -- with proofs of functor laws

How can I prove
  fmap g . f :: M A -> N B
  =
  f . fmap g :: M A -> N B

I assume I need to make some sort of appeal to the parametricity of M :- N.



  Is there some
  more fundamental law of natural transformations that I'm not aware of
  that I need to use?  Is it possible to write a natural transformation
  in Haskell that violates this law?
 
-- ryan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Idris

2012-01-20 Thread Ryan Ingram
Has anyone played with Idris (http://idris-lang.org/) at all?  It looks
interesting, and I'd love to play with it, but unfortunately I only have
windows machines up and running at the moment and the documentation seems
to imply it only builds on unixy systems.

I'm curious how difficult it would be to get a win32 implementation up and
running.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-20 Thread Ryan Ingram
I don't currently have anything to add to this discussion, but I want to
encourage you all to keep having it because I think it has potential to
improve the language in the do things right or don't do them at all
philosophy that Haskell tends towards.

  -- ryan

On Fri, Jan 20, 2012 at 6:32 AM, Jacques Carette care...@mcmaster.cawrote:

 On 19/01/2012 10:19 PM, Edward Z. Yang wrote:

  In other words,
 MonadZero has no place in dealing with pattern match failure!

  I completely agree.  See Bimonadic semantics for basic pattern matching
 calculi [1] for an exploration of just that.  In the language of that
 paper, the issue is that there is a monad of effects for actions, and a
 monad of effects for pattern matching, and while these are very lightly
 related, they really are quite different.  By varying both monads, one can
 easily vary through a lot of different behaviour for pattern-matching as
 found in the literature.

 I should add that if we had known about some of the deeper structures of
 pattern matching (as in Krishnaswami's Focusing on Pattern Matching [2],
 published 3 years *later*), we could have simplified our work.

 Jacques

 [1] http://www.cas.mcmaster.ca/~kahl/Publications/Conf/Kahl-Carette-Ji-2006a.html
 [2] http://www.cs.cmu.edu/~neelk/pattern-popl09.pdf


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ST not strict enough?

2011-11-21 Thread Ryan Ingram
On Fri, Nov 18, 2011 at 4:05 AM, Yves Parès limestr...@gmail.com wrote:

 (Sorry for the double mail)
 ...so there is no way to do that inside the function passed to modifySTRef?
 In other words, there is no way to ensure *inside* a function that its
 result will be evaluated strictly?


modifySTRef looks like this (sugared up a bit):

modifySTRef r f = do
    x <- readSTRef r
    writeSTRef r (f x)

Note that this writes the *thunk* (f x) to the STRef, and there is
absolutely nothing you can do about it.  f isn't demanded by this
function, let alone f x.

You can implement your own strict modifySTRef' easily though, as other
people in this thread have shown.
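
A minimal sketch of such a strict variant, written here for illustration: it
forces the new value before writing it back.

import Control.Monad.ST
import Data.STRef

modifySTRef' :: STRef s a -> (a -> a) -> ST s ()
modifySTRef' r f = do
    x <- readSTRef r
    let x' = f x
    x' `seq` writeSTRef r x'
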
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Vector.Unboxed

2011-11-11 Thread Ryan Ingram
If the internal vectors are fixed size, you can easily write a wrapper
around Vector Int that converts (Int,Int) indices into indices in the
sub-vector.
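
For the fixed-size case, a rough sketch of such a wrapper (Vec2D, rows, cols
and fromLists are illustrative names; a row-major layout is assumed):

import qualified Data.Vector.Unboxed as V

data Vec2D = Vec2D { rows :: !Int, cols :: !Int, payload :: !(V.Vector Int) }

index2 :: Vec2D -> (Int, Int) -> Int
index2 (Vec2D _ c v) (i, j) = v V.! (i * c + j)

fromLists :: [[Int]] -> Vec2D
fromLists xss = Vec2D (length xss) (length (head xss)) (V.fromList (concat xss))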

If the internal vectors have dynamic size, you can't declare an Unbox
instance, because pointers can't be unboxed; unboxed types are opaque to
the garbage collector.

At a low level, Vector Int is

  Vector Word# Word# ByteArray#

where Word# are machine words and ByteArray# is like 'const char *' that is
understood by the ghc garbage collector.

On Wed, Nov 9, 2011 at 1:56 AM, kaffeepause73 kaffeepaus...@yahoo.dewrote:

 Hello,

 quick question about unboxed Vectors :

 Is it possible to create an unboxed vector of unboxed vector ? :

  import qualified Data.Vector.Unboxed as V
  type UnboxedNestedVextor =  V.Vector (V.Vector Int)

 Alternatively I would have to use:

  import qualified Data.Vector.Unboxed as V
  import qualified Data.Vector as VB

 type UnboxedNestedVextor =  VB.Vector (V.Vector Int)

 Is there a rule of thumb how much quicker Unboxed Vectors are ?

 Cheers Phil


 --
 View this message in context:
 http://haskell.1045720.n5.nabble.com/Data-Vector-Unboxed-tp4977289p4977289.html
 Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] arr considered harmful

2011-11-01 Thread Ryan Ingram
On Tue, Nov 1, 2011 at 3:36 AM, Serguey Zefirov sergu...@gmail.com wrote:

 2011/11/1 Ryan Ingram ryani.s...@gmail.com:
 Would you mind give me some examples on how you desribe real circuits
 with that abstraction and, especially, an Arrow instance (even
 imaginary one)?


Sure, here's a simple SR latch:

nor :: Circuit (Bool,Bool) Bool
nor = Or `Then` Not

rs :: Circuit (Bool,Bool) (Bool,Bool)
rs = proc (r,s) -> do
    rec
        q  <- nor -< (r, q')
        q' <- nor -< (s, q)
    id -< (q,q')

instance Category Circuit where
   id = Wire
   (.) = flip Then

instance GArrow Circuit where
ga_first = First  -- Circuit a b -> Circuit (a,c) (b,c)
ga_second = Second  -- Circuit a b -> Circuit (c,a) (c,b)
ga_cancelr = Cancel -- Circuit (a,()) a
ga_cancell = Swap `Then` Cancel -- Circuit ((),a) a
ga_uncancelr = Uncancel -- Circuit a (a, ())
ga_uncancell = Uncancel `Then` Swap -- Circuit a ((),a)
ga_assoc  = AssocL -- Circuit ((a,b),c) (a,(b,c))
ga_unassoc = AssocR -- Circuit (a,(b,c)) ((a,b),c)

instance GArrowDrop Circuit where
ga_drop = Ground -- Circuit a ()

instance GArrowCopy Circuit where
ga_copy = Split -- Circuit a (a,a)

instance GArrowSwap Circuit where
ga_swap = Swap -- Circuit (a,b) (b,a)

instance GArrowLoop Circuit where
ga_loop = Loop -- Circuit (a,c) (b,c) -> Circuit a b

which would turn into something like

rs =
  -- (r,s)
  Loop (
  -- Input plumbing
  -- ((r,s),(q_in,q'_in))
  AssocL `Then`
  -- (r, (s, (q_in,q'_in))
  Second (
  -- (s, (q_in,q'_in))
  Second swap `Then`
  -- (s, (q'_in,q_in))
  AssocR `Then` First Swap `Then` AssocL
  -- (q'_in, (s,q_in))
  ) `Then`
  -- (r, (q'_in, (s,q_in)))
  AssocR `Then`
  -- ((r,q'_in), (s,q_in))

  -- Computation!
  First (Or `Then` Not) `Then`  -- from nor
  -- (q, (s,q_in))

  Second (Or `Then` Not) `Then`  -- from nor
  -- (q, q')

  -- Output plumbing
  Split
  -- ((q,q'), (q,q'))
  ) `Then`
  -- (q,q')
  Wire  -- from id

I am interested because I thought about an approach like that and
 found it not easy to use one. So I stuck with monadic netlists.


When I did some circuit generation for the ICFP contest last year, I also
went with monadic netlists.  But I had problems coming up with
compile-time-enforcable guarantees like 'this wire is only used once'

And really, this description is for more than circuit generation; it
applies to any sort of computation that you might want to create with
arrows.  I was looking at arrowized FRP earlier this year and was
frustrated by how impossible it was to optimize the resulting dataflow
networks.  I was continually plagued by the appearance of 'arr' in my
networks, in situations where I didn't think it belonged, and with no good
way to see what that 'arr' was actually doing.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] arr considered harmful

2011-11-01 Thread Ryan Ingram
On Mon, Oct 31, 2011 at 6:52 PM, Paterson, Ross r.pater...@city.ac.ukwrote:


 If you require the circuit to be parametric in the value types, you can
 limit the types of function you can pass to arr to simple plumbing.
 See the netlist example at the end of my Fun of Programming slides (
 http://www.soi.city.ac.uk/~ross/papers/fop.html).


That's a neat trick, Ross!  It seems really similar to using parametricity
to recover the insides of lambdas in PHOAS metaprogramming:

-- HOAS expression language
data Expr (v :: * -> *) a where
    Ap  :: Expr v (a -> b) -> Expr v a -> Expr v b
    Var :: v a -> Expr v a
    Lam :: (v a -> Expr v b) -> Expr v (a -> b)

-- some expressions that are parametric in the variable type
ex_id :: Expr v (a -> a)
ex_id = Lam $ \x -> Var x

ex_const :: Expr v (a -> b -> a)
ex_const = Lam $ \x -> Lam $ \y -> Var x

-- a print function that relies on parametricity to expose the insides of
-- the functions inside Lam
printExpr :: (forall v. Expr v a) -> String
printExpr e = pe_helper vars 0 e ""  where
    vars = map (:[]) ['a' .. 'z'] ++ map (\n -> "t" ++ show n) [1..]

prec_lam = 1
prec_ap = 2

newtype VarName a = VarName String

pe_helper :: [String] -> Int -> Expr VarName a -> ShowS
pe_helper fv prec (Var (VarName s)) = showString s
pe_helper fv prec (Ap x y) =
    showParen (prec > prec_ap)
        (pe_helper fv prec_ap x . showString " " . pe_helper fv (prec_ap+1) y)
pe_helper (v:fv) prec (Lam k) =
    showParen (prec > prec_lam)
        (showString "\\" . showString v . showString " -> " . pe_helper fv prec_lam e)
    where e = k (VarName v)

-- some test cases
test1 = printExpr ex_const -- "\a -> \b -> a"
test2 = printExpr (ex_id `Ap` ex_const) -- "(\a -> a) (\a -> \b -> a)"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to implement digital filters using Arrows

2011-11-01 Thread Ryan Ingram
Try

swap p = (snd p, fst p)

or, equivalently

swap ~(x,y) = (y,x)

  -- ryan

On Tue, Nov 1, 2011 at 1:30 PM, Captain Freako capn.fre...@gmail.comwrote:

 Hi John,

 I'm trying to use the GHCI debugger on this code:

  20 instance ArrowLoop SF where
  21 loop (SF f) = SF $ \as ->
  22 let (bs, cs) = unzip (f (zip as (stream cs))) in bs
  23   where stream ~(x:xs) = x : stream xs
  24
  25 swap :: (a,b) -> (b,a)
  26 swap (x,y) = (y,x)

 in order to watch the recursion of the `loop' function unfold.
 However, when I single step through the code, I never stop on line 22
 (where I could, presumably, peek in at `bs' and `cs', in order to see
 them develop):

 *SF :break swap
 Breakpoint 1 activated at SF.hs:26:1-18
 *SF runSF (loop (arr swap)) [1,2,3]
 Stopped at SF.hs:26:1-18
 _result :: (b, a) = _
 [SF.hs:26:1-18] *SF :step
 Stopped at SF.hs:26:14-18
 _result :: (b, a) = _
 x :: a = _
 y :: b = _
 [SF.hs:26:14-18] *SF :
 [1Stopped at SF.hs:23:34-42
 _result :: [a] = _
 xs :: [a] = _
 [SF.hs:23:34-42] *SF :
 Stopped at SF.hs:23:13-42
 _result :: [a] = _
 [SF.hs:23:13-42] *SF :
 Stopped at SF.hs:23:30-42
 _result :: [a] = _
 x :: a = _
 xs :: [a] = _
 [SF.hs:23:30-42] *SF :
 (Pattern repeats.)

 Do you have any advice?

 Thanks,
 -db



 On Mon, Oct 31, 2011 at 3:19 PM, John Lask jvl...@hotmail.com wrote:
  On 1/11/2011 1:35 AM, Captain Freako wrote:
 
  you need to study ArrowLoop and understand that. In the code

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to implement digital filters using Arrows

2011-11-01 Thread Ryan Ingram
Never mind, I misread the code, 'zip' and the lazy definition of stream
should add the necessary laziness.

  -- ryan

On Tue, Nov 1, 2011 at 3:36 PM, Ryan Ingram ryani.s...@gmail.com wrote:

 Try

 swap p = (snd p, fst p)

 or, equivalently

 swap ~(x,y) = (y,x)

   -- ryan


 On Tue, Nov 1, 2011 at 1:30 PM, Captain Freako capn.fre...@gmail.comwrote:

 Hi John,

 I'm trying to use the GHCI debugger on this code:

  20 instance ArrowLoop SF where
  21 loop (SF f) = SF $ \as ->
  22 let (bs, cs) = unzip (f (zip as (stream cs))) in bs
  23   where stream ~(x:xs) = x : stream xs
  24
  25 swap :: (a,b) -> (b,a)
  26 swap (x,y) = (y,x)

 in order to watch the recursion of the `loop' function unfold.
 However, when I single step through the code, I never stop on line 22
 (where I could, presumably, peek in at `bs' and `cs', in order to see
 them develop):

 *SF :break swap
 Breakpoint 1 activated at SF.hs:26:1-18
 *SF runSF (loop (arr swap)) [1,2,3]
 Stopped at SF.hs:26:1-18
 _result :: (b, a) = _
 [SF.hs:26:1-18] *SF :step
 Stopped at SF.hs:26:14-18
 _result :: (b, a) = _
 x :: a = _
 y :: b = _
 [SF.hs:26:14-18] *SF :
 [1Stopped at SF.hs:23:34-42
 _result :: [a] = _
 xs :: [a] = _
 [SF.hs:23:34-42] *SF :
 Stopped at SF.hs:23:13-42
 _result :: [a] = _
 [SF.hs:23:13-42] *SF :
 Stopped at SF.hs:23:30-42
 _result :: [a] = _
 x :: a = _
 xs :: [a] = _
 [SF.hs:23:30-42] *SF :
 (Pattern repeats.)

 Do you have any advice?

 Thanks,
 -db



 On Mon, Oct 31, 2011 at 3:19 PM, John Lask jvl...@hotmail.com wrote:
  On 1/11/2011 1:35 AM, Captain Freako wrote:
 
  you need to study ArrowLoop and understand that. In the code

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to implement digital filters using Arrows

2011-11-01 Thread Ryan Ingram
First, let's lay out our definitions:

unzip [] = ([], [])
unzip ((x,y):xys) = (x:xs, y:ys) where (xs,ys) = unzip xys

zip [] _ = []
zip _ [] = []
zip (x:xs) (y:ys) = (x,y) : zip xs ys

map _ [] = []
map f (x:xs) = f x : map f xs

stream ~(a:as) = a : stream as
-- equivalently
stream xs = head xs : stream (tail xs)

Now we want to evaluate this:
runSF (loop (arr swap))  [1,2,3]

Let's simplify some of the insides a bit:
arr swap
  = SF $ map swap
  = SF $ map (\(x,y) -> (y,x))

loop (arr swap)
  = SF $ \as ->
    let (bs,cs) = unzip (map swap (zip as (stream cs))) in bs

runSF (loop (arr swap)) [1,2,3]
  = runSF (SF $ ...) [1,2,3]
  = (\as -> let (bs,cs) = unzip (map swap (zip as (stream cs))) in bs)
    [1,2,3]

Here is our heap at this point; we are trying to evaluate bs:

p = unzip (map swap (zip as (stream cs)))
as = [1,2,3]
bs = fst p
cs = snd p

snd p forces p, unzip forces its argument, map forces its second argument,
and zip forces both its arguments.
So now we have:

p = unzip retmap
retmap = map swap retzip
retzip = zip as retstream
retstream = stream cs
as = [1,2,3]
bs = fst p
cs = snd p

Evaluating further:
retstream = head cs : retstream2
retstream2 = stream (tail cs)
retzip = (1, head cs) : retzip2
retzip2 = zip [2,3] (stream (tail cs))
retmap = (head cs, 1) : retmap2
retmap2 = map swap retzip2
p = (head cs : xs1, 1 : ys1)
(xs1,ys1) = unzip retmap2
bs = head cs : xs1
cs = 1 : ys1
bs = 1 : xs1

and we can now return the first cons cell of bs.

This repeats until we get [1,2,3] back out; note that each value goes
through both sides of the swap before coming out the 'front' again.

  -- ryan

On Tue, Nov 1, 2011 at 1:30 PM, Captain Freako capn.fre...@gmail.comwrote:

 Hi John,

 I'm trying to use the GHCI debugger on this code:

  20 instance ArrowLoop SF where
  21 loop (SF f) = SF $ \as ->
  22 let (bs, cs) = unzip (f (zip as (stream cs))) in bs
  23   where stream ~(x:xs) = x : stream xs
  24
  25 swap :: (a,b) -> (b,a)
  26 swap (x,y) = (y,x)

 in order to watch the recursion of the `loop' function unfold.
 However, when I single step through the code, I never stop on line 22
 (where I could, presumably, peek in at `bs' and `cs', in order to see
 them develop):

 *SF :break swap
 Breakpoint 1 activated at SF.hs:26:1-18
 *SF runSF (loop (arr swap)) [1,2,3]
 Stopped at SF.hs:26:1-18
 _result :: (b, a) = _
 [SF.hs:26:1-18] *SF :step
 Stopped at SF.hs:26:14-18
 _result :: (b, a) = _
 x :: a = _
 y :: b = _
 [SF.hs:26:14-18] *SF :
 [1Stopped at SF.hs:23:34-42
 _result :: [a] = _
 xs :: [a] = _
 [SF.hs:23:34-42] *SF :
 Stopped at SF.hs:23:13-42
 _result :: [a] = _
 [SF.hs:23:13-42] *SF :
 Stopped at SF.hs:23:30-42
 _result :: [a] = _
 x :: a = _
 xs :: [a] = _
 [SF.hs:23:30-42] *SF :
 (Pattern repeats.)

 Do you have any advice?

 Thanks,
 -db



 On Mon, Oct 31, 2011 at 3:19 PM, John Lask jvl...@hotmail.com wrote:
  On 1/11/2011 1:35 AM, Captain Freako wrote:
 
  you need to study ArrowLoop and understand that. In the code

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lazy A-star search

2011-10-31 Thread Ryan Ingram
On Sun, Oct 30, 2011 at 8:44 AM, Anton Kholomiov
anton.kholom...@gmail.comwrote:

 I'm misunderstanding astar. I've thought that 'whole route'-heuristic
 will prevent algorithm from going in circles. The more you circle around
 the more the whole route distance is.


Sort of.  Consider the tree in my example graph:

A -1- B -1- C -1- D -1- E -9- J
  -2- F -1- C -1- D -1- E -9- J
-2- G -1- D -1- E -9- J
  -2- H -1- E -9- J
-2- I -1- J

There's no circling going on as you depth-first search this tree, even
though you are wasting time visiting the same node multiple times.

However, the thing you know with A*/Dijkstra is this: If I have visited a
node, there is no shorter path to that node.  So any time I encounter that
node again, I must have at least as long of a path, and so any later nodes
along that path can't be any better along this path.

Effectively, you are pruning the tree:
A -1- B -1- C -1- D -1- E -9- J ***
  -2- F -1- C ***
-2- G -1- D ***
  -2- H -1- E ***
-2- I -1- J GOAL
(*** = pruned branches)

since the second time you visit C, you know the first path was faster, so
there is no reason to continue to visit D/E again.  This is even more
noticable in graphs with multidirectional edges, as the tree is infinitely
deep at every cycle.

I wonder if there is a way to represent this more directly as
tree-pruning.  It's weird, because you are pruning the tree based on
visiting other branches of the tree.
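
One rough way to make that explicit (a sketch, not from the thread): a
best-first traversal that keeps a frontier of sub-trees plus a set of
already-seen labels, so the second time a label turns up its whole subtree is
pruned.  (For the astar variant below one would key the visited set on the
state alone, not the (state, cost) pair.)

import qualified Data.Set as Set
import Data.Tree (Tree(..))

searchBy' :: Ord a => (a -> a -> Ordering) -> Tree a -> [a]
searchBy' heur t0 = go Set.empty [t0]
  where
    go _    []                = []
    go seen (Node v ts : rest)
        | v `Set.member` seen = go seen rest
        | otherwise           = v : go (Set.insert v seen)
                                       (foldr insertTree rest ts)
    -- insert a tree into the frontier, keeping it sorted by heur on the roots
    insertTree t [] = [t]
    insertTree t@(Node x _) (u@(Node y _) : us)
        | heur x y /= GT = t : u : us
        | otherwise      = u : insertTree t us
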
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] arr considered harmful

2011-10-31 Thread Ryan Ingram
I know it's a bit of an 'intentionally provocative' title, but with the
recent discussions on Arrows I thought it timely to bring this up.

Most of the conversion from arrow syntax into arrows uses 'arr' to move
components around. However, arr is totally opaque to the arrow itself, and
prevents describing some very useful objects as arrows.

For example, I would love to be able to use the arrow syntax to define
objects of this type:

data Circuit a b where
Const :: Bool -> Circuit () Bool
Wire :: Circuit a a
Delay :: Circuit a a
And :: Circuit (Bool,Bool) Bool
Or :: Circuit (Bool,Bool) Bool
Not :: Circuit Bool Bool
Then :: Circuit a b -> Circuit b c -> Circuit a c
Pair :: Circuit a c -> Circuit b d -> Circuit (a,b) (c,d)
First :: Circuit a b -> Circuit (a,c) (b,c)
Swap :: Circuit (a,b) (b,a)
AssocL :: Circuit ((a,b),c) (a,(b,c))
AssocR :: Circuit (a,(b,c)) ((a,b),c)
Loop :: Circuit (a,b) (a,c) -> Circuit b c
etc.

Then we can have code that examines this concrete data representation,
converts it to VHDL, optimizes it, etc.

However, due to the presence of the opaque 'arr', there's no way to make
this type an arrow without adding an 'escape hatch'
Arr :: (a -> b) -> Circuit a b
which breaks the abstraction: circuit is supposed to represent an actual
boolean circuit; (Arr not) is not a valid circuit because we've lost the
information about the existence of a 'Not' gate.

The arrow syntax translation uses arr to do plumbing of variables.  I think
a promising project would be to figure out exactly what plumbing is needed,
and add those functions to a sort of 'PrimitiveArrow' class.  All of these
plumbing functions are trivially implemented in terms of 'arr', when it
exists, but if it doesn't, it should be possible to use the arrow syntax
regardless.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] arr considered harmful

2011-10-31 Thread Ryan Ingram
This seems basically what I'm talking about, except even more hardcore.  I
think mostly what I'm suggesting is that the GHC arrow preprocessor to
compile to something like generalized arrows, by default, with current
Arrows as a special case.

  -- ryan

On Mon, Oct 31, 2011 at 5:48 PM, Felipe Almeida Lessa 
felipe.le...@gmail.com wrote:

 On Mon, Oct 31, 2011 at 10:33 PM, Ryan Ingram ryani.s...@gmail.com
 wrote:
  The arrow syntax translation uses arr to do plumbing of variables.  I
 think
  a promising project would be to figure out exactly what plumbing is
 needed,
  and add those functions to a sort of 'PrimitiveArrow' class.  All of
 these
  plumbing functions are trivially implemented in terms of 'arr', when it
  exists, but if it doesn't, it should be possible to use the arrow syntax
  regardless.

 There are already generalized arrows [1].  Is that what you are looking
 for?

 Cheers,

 [1] http://www.cs.berkeley.edu/~megacz/garrows/

 --
 Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lazy A-star search

2011-10-27 Thread Ryan Ingram
You're missing one of the key insights from A-star (and simple Dijkstra, for
that matter): once you visit a node, you don't have to visit it again.

Consider a 5x2 2d graph with these edge costs:

B 1 C 1 D 1 E 9 J
1   1   1   1   1
A 2 F 2 G 2 H 2 I

with the start node being A, the target node being J, and the heuristic
being manhattan distance.  Your search will always try to take the top
route, on every node along the bottom path, even though you visit every node
along the top route in your first try at reaching the goal.  You need a way
to mark that a node is visited and remove it from future consideration, or
else you're wasting work.

A-star will visit the nodes in the order ABCDE FGHIJ; your algorithm visits
the nodes in the order ABCDE FCDE GDE HE IJ.

  -- ryan

On Sat, Oct 22, 2011 at 5:28 AM, Anton Kholomiov
anton.kholom...@gmail.comwrote:

 Recently I was looking for an A-star search algorithm. I've found a
 package
 but I couldn't understand the code. Then I saw some blogposts but they
  were difficult to understand too. I thought about some easier solution
 that
 relies on laziness. And I've come to this:

 Heuristic search is like depth-first search but solutions in sub-trees
 are concatenated with mergeBy function, that concatenates two
 list by specific order:

 module Search where

 import Control.Applicative
 import Data.Function(on)
 import Control.Arrow(second)
 import Data.Tree

 -- | Heuristic search. Nodes are visited from smaller to greater.
 searchBy :: (a -> a -> Ordering) -> Tree a -> [a]
 searchBy heur (Node v ts) =
 v : foldr (mergeBy heur) [] (searchBy heur <$> ts)

 -- | Merge two lists. Elements concatenated in specified order.
 mergeBy :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
 mergeBy _ a  []    = a
 mergeBy _ [] b     = b
 mergeBy p (a:as) (b:bs)
 | a `p` b == LT = a : mergeBy p as (b:bs)
 | otherwise     = b : mergeBy p bs (a:as)


 Now we can define specific heuristic search in terms of searchBy:

 -- | Heuristic is distance to goal.
 bestFirst :: Ord h => (a -> h) -> (a -> [a]) -> a -> [a]
 bestFirst dist alts =
 searchBy (compare `on` dist) . unfoldTree (\a -> (a, alts a))

 -- | A-star search.
 -- Heuristic is estimated length of whole path.
 astar :: (Ord h, Num h) => (a -> h) -> (a -> [(a, h)]) -> a -> [a]
 astar dist alts s0 = fmap fst $
 searchBy (compare `on` astarDist) $ unfoldTree gen (s0, 0)
 where astarDist (a, d) = dist a + d
   gen (a, d)  = d `seq` ((a, d), second (+d) $ alts a)

 I'm wondering is it effective enough?


 Anton

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lazy A-star search

2011-10-27 Thread Ryan Ingram
Also, this wasn't clear in my message, but the edges in the graph only go
one way; towards the top/right; otherwise the best path is ABCDEHIJ :)

On Thu, Oct 27, 2011 at 10:48 AM, Ryan Ingram ryani.s...@gmail.com wrote:

 You're missing one of the key insights from A-star (and simple Dijkstra,
 for that matter): once you visit a node, you don't have to visit it again.

 Consider a 5x2 2d graph with these edge costs:

 B 1 C 1 D 1 E 9 J
 1   1   1   1   1
 A 2 F 2 G 2 H 2 I

 with the start node being A, the target node being J, and the heuristic
 being manhattan distance.  Your search will always try to take the top
 route, on every node along the bottom path, even though you visit every node
 along the top route in your first try at reaching the goal.  You need a way
 to mark that a node is visited and remove it from future consideration, or
 else you're wasting work.

 A-star will visit the nodes in the order ABCDE FGHIJ; your algorithm visits
 the nodes in the order ABCDE FCDE GDE HE IJ.

   -- ryan

 On Sat, Oct 22, 2011 at 5:28 AM, Anton Kholomiov 
 anton.kholom...@gmail.com wrote:

 Recently I was looking for an A-star search algorithm. I've found a
 package
 but I couldn't understand the code. Then I saw some blogposts but they
  were difficult to understand too. I thought about some easier solution
 that
 relies on laziness. And I've come to this:

 Heuristic search is like depth-first search but solutions in sub-trees
 are concatenated with mergeBy function, that concatenates two
 list by specific order:

 module Search where

 import Control.Applicative
 import Data.Function(on)
 import Control.Arrow(second)
 import Data.Tree

 -- | Heuristic search. Nodes are visited from smaller to greater.
 searchBy :: (a -> a -> Ordering) -> Tree a -> [a]
 searchBy heur (Node v ts) =
 v : foldr (mergeBy heur) [] (searchBy heur <$> ts)

 -- | Merge two lists. Elements concatenated in specified order.
 mergeBy :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
 mergeBy _ a  []    = a
 mergeBy _ [] b     = b
 mergeBy p (a:as) (b:bs)
 | a `p` b == LT = a : mergeBy p as (b:bs)
 | otherwise     = b : mergeBy p bs (a:as)


 Now we can define specific heuristic search in terms of searchBy:

 -- | Heuristic is distance to goal.
 bestFirst :: Ord h => (a -> h) -> (a -> [a]) -> a -> [a]
 bestFirst dist alts =
 searchBy (compare `on` dist) . unfoldTree (\a -> (a, alts a))

 -- | A-star search.
 -- Heuristic is estimated length of whole path.
 astar :: (Ord h, Num h) => (a -> h) -> (a -> [(a, h)]) -> a -> [a]
 astar dist alts s0 = fmap fst $
 searchBy (compare `on` astarDist) $ unfoldTree gen (s0, 0)
 where astarDist (a, d) = dist a + d
   gen (a, d)  = d `seq` ((a, d), second (+d) $ alts a)

 I'm wondering is it effective enough?


 Anton

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Efficient mutable arrays in STM

2011-10-27 Thread Ryan Ingram
On Tue, Oct 25, 2011 at 1:46 PM, Ben Franksen ben.frank...@online.dewrote:

  IME, there are (at least) two possible problems
  here, 1) transactions scale (quadratically, I think) with the number of
  TVars touched,

 Ouch! What would be the reason for that? I thought it would be linear... I
 mean what happens is that the transaction log gets built when the
 transaction runs, which should take a constant time per TVar, and then on
 commit we have to traverse the log, which is again linear in the number of
 TVars...


Your first assumption is incorrect.  Every time you access a TVar it needs
to check in the log if that TVar was already accessed in this transaction,
so that it can get/update the current value.  Right now the log is just a
list, so accessing a TVar takes O(number of TVars already accessed).

Consider this code:

f :: TVar Int -> STM ()
f v = do
    x <- readTVar v
    writeTVar v $! (x+1)

g :: Int -> TVar Int -> STM ()
g n v = mapM_ (\_ -> f v) [1..n]

In order for f to get the current value of the TVar, it has to check if the
variable is in the log already; otherwise later calls to f will just see the
original value in memory and not repeatedly increment it.
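
(Concretely, under that list representation the k-th TVar touched in a
transaction costs O(k) log lookups, so a transaction touching n distinct TVars
does roughly 1 + 2 + ... + n = n(n+1)/2 work in the log alone -- quadratic in
n, which matches the scaling mentioned above.)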

As to your second assumption, it's been a while since I read the STM source,
so I can't comment there.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to implement a digital filter, using Arrows?

2011-10-18 Thread Ryan Ingram
Your type stopped being an arrow when the state type started to depend on
the input type:

Filter a b ~= (a, FS a) -> (b, FS a)

Filter b c ~= (b, FS b) -> (c, FS b)

It's impossible to compose these two functions into a single function of
type Filter a c, because the state type doesn't match.

You need to make the filter state not dependent on the input type:

newtype Filter s a b = F { runFilter :: (a, FilterState s) -> (b, FilterState s) }

You can still create objects with the type
   Filter a a b
which correspond to your old filter type.  But these functions will always
'start' a pipeline.  Which I think is what you want anyways!
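
With the state type shared like that, composition is just composition of the
underlying functions; a small sketch (composeF is only an illustrative name):

composeF :: Filter s a b -> Filter s b c -> Filter s a c
composeF (F f) (F g) = F (g . f)
-- the (b, FilterState s) that f produces is exactly what g consumes;
-- with the original Filter a b this would have required FS a ~ FS b.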

  -- ryan


On Tue, Oct 18, 2011 at 2:35 PM, Captain Freako capn.fre...@gmail.comwrote:

 Hi John,
 Thanks for this reply:

 Date: Tue, 18 Oct 2011 14:05:22 +1030
 From: John Lask jvl...@hotmail.com
 Subject: Re: [Haskell-cafe] How to implement a digital filter, using
Arrows?
 To: haskell-cafe@haskell.org
 Message-ID: BLU0-
 smtp384394452fd2750fbe3bcfcc6...@phx.gbl
 Content-Type: text/plain; charset=ISO-8859-1; format=flowed



 your function corresponds with Control.Arrow.Transformer.Automaton. If
 you frame your function is such most of your plumbing is taken care of.

 Following your advice, I arrived at:

   1 {-# LANGUAGE Arrows, GeneralizedNewtypeDeriving, FlexibleContexts #-}
   2
   3 module Filter (
   4 FilterState
   5   , Filter
   6   , applyFilter
   7   , convT
   8 ) where
   9
  10 import EitherT
  11 import Control.Monad
  12 import Control.Monad.State
  13 import Control.Arrow
  14 import Control.Arrow.Operations
  15 import Control.Arrow.Transformer
  16 import Control.Arrow.Transformer.All
  17 import Data.Stream as DS (fromList, toList)
  18
  19 -- tap weights, `as' and `bs', are being made part of the filter state,
 in
  20 -- order to accomodate adaptive filters (i.e. - DFEs).
  21 data FilterState a = FilterState {
  22 as   :: [a] -- transfer function denominator coefficients
  23   , bs   :: [a] -- transfer function numerator coefficients
  24   , taps :: [a] -- current delay tap stored values
  25   }
  26
  27 -- Future proofing the implementation, using the `newtype' trick.
  28 newtype Filter b c = F {
  29 runFilter :: (b, FilterState b) - (c, FilterState b)
  31   }
  32
  33 -- Time domain convolution filter (FIR or IIR),
  34 -- expressed in direct form 2
  35 convT :: (Num b) = Filter b b
  36 convT = F $ \(x, s) -
  37 let wk = (x - sum [a * t | (a, t) - zip (tail $ as s) (taps s)])
  38 newTaps = wk : ((reverse . tail . reverse) $ taps s)
  39 s' = s {taps = newTaps}
  40 y  = sum [b * w | (b, w) - zip (bs s) (wk : (taps s))]
  41 in (y, s')
  42
  43 -- Turn a filter into an Automaton, in order to use the built in
 plubming
  44 -- of Arrows to run the filter on an input.
  45 filterAuto :: (ArrowApply a) = Filter b c - FilterState b -
 Automaton a (e, b) c
  46 filterAuto f s = Automaton a where
  47 a = proc (e, x) - do
  48 (y, s') - arr (runFilter f) - (x, s)
  49 returnA - (y, filterAuto f s')
  50
  53 applyFilter :: Filter b c - FilterState b - [b] - ([c], FilterState
 b)
  54 applyFilter f s =
  55 let a = filterAuto f s
  56 in proc xs - do
  57 ys - runAutomaton a - ((), DS.fromList xs)
  58 s' - (|fetch|)
  59 returnA - (DS.toList ys, s')
  60

 which gave me this compile error:

 Filter.hs:58:16:
 Could not deduce (ArrowState (FilterState b) (-))
   from the context ()
   arising from a use of `fetch' at Filter.hs:58:16-20
 Possible fix:
   add (ArrowState (FilterState b) (-)) to the context of
 the type signature for `applyFilter'
   or add an instance declaration for
  (ArrowState (FilterState b) (-))
 In the expression: fetch
 In the expression:
 proc xs - do { ys - runAutomaton a - ((), fromList xs);
 s' - (|fetch |);
 returnA - (toList ys, s') }
 In the expression:
 let a = filterAuto f s
 in
   proc xs - do { ys - runAutomaton a - ((), fromList xs);
   s' - (|fetch |);
    }

 So, I made this change:

  51 applyFilter :: *(ArrowState (FilterState b) (-)) =* Filter b c -
 FilterState b - [b] -
  52 ([c], FilterState
 b)

 And that compiled. However, when I tried to test my new filter with:

  let s = FilterState [1,0,0] [0.7, 0.2, 0.1] [0, 0, 0]
  applyFilter convT s [1,0,0,0,0]

 I got:

 interactive:1:0:
 No instance for (ArrowState (FilterState Double) (-))
   arising from a use of `applyFilter' at interactive:1:0-30
 Possible fix:
   add an instance declaration for
   (ArrowState (FilterState Double) (-))
 In the expression: applyFilter convT s [1, 0, 0, 0, ]
 In the definition of `it': it = applyFilter convT s [1, 0, 0, ]

 I thought, maybe, I 

Re: [Haskell-cafe] Trouble using State Monad.

2011-10-11 Thread Ryan Ingram
Your filter type isn't a Monad.

In particular

bind :: (a -> EitherT e (State FilterState) a) -> (a -> b -> EitherT e
(State FilterState) b) -> b -> EitherT e (State FilterState) b

can't be implemented, as you have no place to grab an 'a' to pass to the
initial computation.

If you fix the input type, you can do

newtype Filter r e a = F {
runFilter :: r -> EitherT e (State FilterState) a
}

which is isomorphic to

newtype Filter r e a = F {
runFilter :: ReaderT r (EitherT e (State FilterState)) a
}

which newtype deriving will be able to deal with easily.
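
So the declaration could look roughly like this (a sketch only; EitherT and
FilterState are the ones from the original post, GeneralizedNewtypeDeriving is
on, and the usual mtl-style instances are assumed to exist for that stack).
The illustrative feedFilter shows where the input is supplied:

newtype Filter r e a = F {
    runFilter :: ReaderT r (EitherT e (State FilterState)) a
  } deriving (Monad, MonadState FilterState, MonadReader r)

-- feeding an input to the filter just runs the ReaderT layer:
feedFilter :: Filter r e a -> r -> FilterState -> (Either e a, FilterState)
feedFilter f r = runState (runEitherT (runReaderT (runFilter f) r))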

  -- ryan


On Sat, Oct 8, 2011 at 4:28 PM, Captain Freako capn.fre...@gmail.comwrote:

 Hi all,

 I'm trying to use the State Monad to help implement a digital filter:

  17 newtype Filter e a = F {
  18 runFilter :: a -> EitherT e (State FilterState) a
  19   } deriving (Monad, MonadState FilterState)

 but I'm getting these compiler errors:

 Filter.hs:19:14:
 Can't make a derived instance of `Monad (Filter e)'
   (even with cunning newtype deriving):
   cannot eta-reduce the representation type enough
 In the newtype declaration for `Filter'

 Filter.hs:19:21:
 Can't make a derived instance of
   `MonadState FilterState (Filter e)'
   (even with cunning newtype deriving):
   cannot eta-reduce the representation type enough
 In the newtype declaration for `Filter'

 If I change the code to this:

  17 newtype Filter e a = F {
  18 runFilter :: EitherT e (State FilterState) a
  19   } deriving (Monad, MonadState FilterState)

 it compiles, but I can't figure out how I'd feed the input to the filter,
 in that case.

 In comparing this to the tricks used in constructing the State Monad based
 version of the `Parser' type,
 I notice that Parser gets around this issue, by having the input (i.e. -
 input stream) be a part of the initial state,
 but I'm not sure that's appropriate for a digital filter.

 Thanks,
 -db

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Best bit LIST data structure

2011-10-11 Thread Ryan Ingram
On Sun, Oct 9, 2011 at 6:18 AM, Ryan Newton rrnew...@gmail.com wrote:


 Yep, it is simple.  But I prefer to only use well-tested data structure
 libraries where I can!  Here's an example simple implementation (partial --
 missing some common functions):


 module Data.BitList
   ( BitList
   , cons, head, tail, empty
   , pack, unpack, length, drop
   )
 where

 import Data.Int
 import Data.Bits
 import Prelude as P hiding (head,tail,drop,length)
 import qualified Data.List as L
 import Test.HUnit

 data BitList = One  {-# UNPACK #-} !Int {-# UNPACK #-} !Int64
  | More {-# UNPACK #-} !Int {-# UNPACK #-} !Int64 BitList


I suggest

data BitTail = Zero | More {-# UNPACK #-} !Int64 BitTail
data BitList = Head {-# UNPACK #-} !Int {-# UNPACK #-} !Int64 BitTail
empty = Head 0 0 Zero

or else just
data BitList = Head {-# UNPACK #-} !Int {-# UNPACK #-} !Int64 [Int64]
empty = Head 0 0 []
length (Head n _ xs) = n + 64 * List.length xs
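
For instance, cons and head for that last representation could look roughly
like this (a quick sketch, using the Data.Bits operations already imported in
the quoted module, and not checked against the rest of it):

cons :: Bool -> BitList -> BitList
cons b (Head n w ws)
  | n < 64    = Head (n+1) (put w n) ws    -- room left in the head word
  | otherwise = Head 1 (put 0 0) (w : ws)  -- head word is full, push it back
  where put x i = if b then setBit x i else clearBit x i

head :: BitList -> Bool
head (Head 0 _ _) = error "head: empty BitList"
head (Head n w _) = testBit w (n-1)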

 unpack :: BitList -> [Bool]
 unpack (One 0 _) = []
 unpack (One i bv)= (bv `testBit` (i-1)) : unpack (One (i-1) bv)
 unpack (More 0 _ r)  =  unpack r
 unpack (More i bv r) = (bv `testBit` (i-1)) : unpack (More (i-1) bv r)


I'd implement as

view :: BitList -> Maybe (Bool, BitList)
view (One 0 _) = Nothing
view bl = Just (head bl, tail bl)

unpack = unfoldr view


 drop :: Int -> BitList -> BitList
 drop 0 bl   = bl
 drop n bl | n >= 64 = case bl of
     One _ _    -> error "drop: not enough elements in BitList"
     More i _ r -> drop (n-i) r
 drop n bl = case bl of
   One i  bv   -> One  (i-n) bv
   More i bv r -> More (i-n) bv r


This is wrong.

drop 5 (More 1 0 (One 64 0))
-
More (-4) 0 (One 64 0)

Fixed version (also gives same behavior as List.drop when n > length l)

drop :: Int -> BitList -> BitList
drop n (One i bv)
   | n >= i = empty
   | otherwise = One (i - n) bv
drop n (More i bv r)
   | n >= i = drop (n - i) r
   | otherwise = More (i - n) bv r

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] mapM with Traversables

2011-09-30 Thread Ryan Ingram
You can use Data.Sequence.fromList to go [a] - Seq a, though.

So given

f :: Monad m => a -> m b

you have

import Data.Traversable as T
import Data.Sequence as S

g :: Monad m => [a] -> m (S.Seq b)
g = T.mapM f . S.fromList

  - ryan

On Wed, Sep 28, 2011 at 6:20 PM, Marc Ziegert co...@gmx.de wrote:

 Hi Thomas,
 this should be on the haskell-cafe or haskell-beginners mailing list.
 Haskell@... is mainly for announcements.


 You have:
  f :: Monad m =>
   a -> m b
  Data.Traversable.mapM :: (Monad m, Traversable t) =>
   (a -> m b) -> t a -> m (t b)

 So, if you define g with
  g a = do Data.Traversable.mapM f a
  or in short
  g = Data.Traversable.mapM f
 , then the type will be
  g :: (Monad m, Traversable t) =>
   t a -> m (t b)
  instead of
  g :: [a] -> m (Seq b)
 .

 Try using ghci to find these things out. It helps to get not confused with
 the types.


 Besides the missing Monad context, g misses a generic way to convert
 between different Traversables, which does not exist. You can only convert
 from any Traversable (imagine a Tree) toList; not all Traversables have a
 fromList function.
 For conversion, you might want to use Foldable and Monoid, fold to untangle
 and mappend to recombine; but any specific fromList function will surely
  be more efficient.

 Regards
 - Marc



  Original-Nachricht 
  Datum: Wed, 28 Sep 2011 17:27:58 -0600
  Von: thomas burt thedwa...@gmail.com
  An: hask...@haskell.org
  Betreff: [Haskell] mapM with Traversables

  Hi -
 
  I have a function, f :: Monad m = a - m b, as well as a list of a's.
  I'd
  like to produce a sequence (Data.Sequence) of b's, given the a's:
 
  g :: [a] - m (Seq b)
  g a = do Data.Traversable.mapM f a   -- type error!
 
  I see that Data.Traversable.mapM f a doesn't work... is this like
 asking
  the compiler to infer the cons/append operation from the type signature
 of
  g?
 
  Do I need to write my own function that explicitly calls the append
  functions from Data.Sequence or can I do something else that would work
  for
  any g :: Traversable t, Traversable u = t a - m (u b) given f :: a
 -
  m
  b?
 
  Thanks for any comments!
  Thomas

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Much faster complex monad stack based on CPS state

2011-09-27 Thread Ryan Ingram
My guess is that Cont plays really nicely with GHC's inliner, so things that
end up looking like

 return x >>= \y -> ...

get optimized really well
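
For reference, the kind of definition the steps below assume (a
reconstruction for illustration, not quoted from the earlier thread) is:

newtype ContState r s a = ContState { runCS :: s -> (a -> s -> r) -> r }

instance Monad (ContState r s) where
    return x = ContState $ \s k -> k x s
    m >>= f  = ContState $ \s0 k -> runCS m s0 $ \a s1 -> runCS (f a) s1 k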

return x >>= f
-- inline >>=
= ContState $ \s0 k -> runCS (return x) s0 $ \a s1 -> runCS (f a) s1 k
-- inline return
= ContState $ \s0 k -> runCS (ContState $ \s2 k2 -> k2 x s2) s0 $ \a s1 -> runCS (f a) s1 k
-- runCS record selector
= ContState $ \s0 k -> (\s2 k2 -> k2 x s2) s0 $ \a s1 -> runCS (f a) s1 k
-- beta
= ContState $ \s0 k -> (\k2 -> k2 x s0) $ \a s1 -> runCS (f a) s1 k
-- beta
= ContState $ \s0 k -> (\a s1 -> runCS (f a) s1 k) x s0
-- beta
= ContState $ \s0 k -> runCS (f x) s0 k

and then further inlining of f can take place.

On Mon, Sep 26, 2011 at 4:07 PM, Nicu Ionita nicu.ion...@acons.at wrote:

 Hello list,

 Starting from these emails
 (http://web.archiveorange.com/archive/v/nDNOvSM4JT3GJRSjOm9P)
 I could refactor my code (a UCI chess engine, with complex functions,
 in which the search has a complex monad stack) to run twice as fast as with
 even some hand unroled state transformer! So from 23-24 kilo nodes per
 second it does now 45 to 50 kNps! And it looks like there is still some
 improvement room (I have to play a little bit with strictness annotations
 and so on).

 (Previously I tried specializations, then I removed a lot of polimorphism,
 but nothing helped, it was like hitting a wall.)

 Even more amazingly is that I could program it although I cannot really
 understand the Cont  ContT, but just taking the code example from Ryan
 Ingram (newtype ContState r s a = ...) and looking a bit at the code from
 ContT (from the transformers library), and after fixing some compilation
 errors, it worked and was so fast.

 I wonder why the transformers library does not use this kind of state monad
 definition. Or does it, and what I got is just because of the unrolling? Are
 there monad (transformers) libraries which are faster? I saw the library
 kan-extensions but I did not understand (yet) how to use it.

 Nicu

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Configuration Problem and Plugins

2011-09-06 Thread Ryan Ingram
The other option is

{-# LANGUAGE ExistentialQuantification #-}

data Renderer s = Renderer {
initialize :: IO s,
destroy :: IO (),
renderS :: SystemOutput -> s -> IO s
 }

-- Now, you need to hold the state somewhere, which you can do with an
existential:

data InitializedRenderer = forall s. IRenderer s (Renderer s)

initRenderer :: Renderer s -> IO InitializedRenderer
initRenderer r = do
    s <- initialize r
    return (IRenderer s r)

render :: InitializedRenderer -> SystemOutput -> IO InitializedRenderer
render (IRenderer s r) o = do
    s' <- renderS r o s
    return (IRenderer s' r)




On Sat, Sep 3, 2011 at 10:44 PM, M. George Hansen
technopolit...@gmail.comwrote:

 On Sat, Sep 3, 2011 at 12:33 AM, Max Rabkin max.rab...@gmail.com wrote:
  On Sat, Sep 3, 2011 at 03:15, M. George Hansen technopolit...@gmail.com
 wrote:
  Greetings,
 
  I'm a Python programmer who is relatively new to Haskell, so go easy on
 me :)
 
  I have a program that uses (or will use) plugins to render output to
  the user in a generic way. I'm basing the design of the plugin
  infrastructure on the Plugins library, and have the following
  interface:
 
  data Renderer = Renderer {
  initialize :: IO (),
  destroy :: IO (),
 render :: SystemOutput - IO ()
  }
 
  How about having initialize return the render (and destroy, if
  necessary) functions:
 
  initialize :: IO (SystemOutput - IO ())
 
  or
 
  initialize :: IO (SystemOutput - IO (), IO())
 

 Thanks for your reply. That does seem like the best solution, I'll
 give it a try.

 --
   M. George Hansen

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pointed, but not Applicative

2011-09-01 Thread Ryan Ingram
On Tue, Aug 30, 2011 at 4:53 PM, Sebastian Fischer fisc...@nii.ac.jpwrote:

 I think the idea of functional lists is that the monoids of 'lists'
 and 'functions on lists' are isomorphic with isomorphisms toFList and
 toList:

toFList [] = id
toFList (xs++ys) = toFList xs . toFList ys

toList id = []
toList (f . g) = toList f ++ toList g


Oh absolutely, but my point (if you will pardon the pun), was that just
given the type

newtype FList a = FL ([a] -> [a])
runFList (FL f) = f

and the law

runFList fl as = runFList fl [] ++ as

we can prove that

fmap f fl = FL $ \bs -> map f (runFList fl []) ++ bs

is a valid functor instance:

fmap id
(eta expand) = \fl -> fmap id fl
(apply fmap) = \fl -> FL $ \bs -> map id (runFList fl []) ++ bs
(map law) = \fl -> FL $ \bs -> id (runFList fl []) ++ bs
(apply id) = \fl -> FL $ \bs -> runFList fl [] ++ bs
(FList law) = \fl -> FL $ \bs -> runFList fl bs
(eta reduce) = \fl -> FL $ runFList fl
(constructor of destructor) = \fl -> fl
(unapply id) = \fl -> id fl
(eta reduce) = id

We don't need to know that FList is supposed to represent an isomorphism
to/from lists, although you can derive one, as you've shown.  I just wanted
to show that it's a valid functor, but only if you assume an extra law on
the type.  The functor instance depends critically on converting back to a
list which requires that law.

There's no functor instance for this type that doesn't convert back to a
list, which is unfortunate, because you lose the performance benefits of
constant-time append!

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pointed, but not Applicative

2011-08-30 Thread Ryan Ingram
On Tue, Aug 30, 2011 at 9:42 AM, Conal Elliott co...@conal.net wrote:
 I suspect this definition is what Sebastian meant by converting back and
forth to ordinary lists.

Yep, I know; and technically it violates 'fmap id' == 'id'

for example,
fmap id (FList $ \xs -> xs ++ xs) = FList $ \xs -> xs

If you add this FList law, though, you're OK:

runFList fl as = runFList fl [] ++ as

But, yes, this definition of fmap converts back to an ordinary list
representation.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pointed, but not Applicative

2011-08-29 Thread Ryan Ingram
On Sun, Aug 28, 2011 at 8:24 PM, Maciej Marcin Piechotka 
uzytkown...@gmail.com wrote:

 f `fmap` FList g = _|_
 f `fmap` FList g = map id
 f `fmap` FList g = map _|_
 (+ variation of _|_*)


f `fmap` FList g = \bs -> map f (g []) ++ bs
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unexpected Typeable behaviour; Prelude.undefined

2011-08-29 Thread Ryan Ingram
The problem with hiding the context in the constructor is that there's no
guarantee that the context actually exists in the first place; for example,
given this type

data IsInt a where
Proof :: IsInt Int

this is a legal program:

foo :: IsInt Bool
foo = undefined

That said, you are still just fine to hide the context in the constructor at
the call site:

data MyGADT m where
    MonadAction :: (Typeable1 m, Monad m) => m () -> MyGADT m

instance (Typeable1 m, Monad m) => Typeable (MyGADT m) where
    typeOf t = ...

getTypeRep :: MyGADT m -> TypeRep
getTypeRep x@(MonadAction _) = typeOf (undefined `asTypeOf` x)

Here we unpack the context from x and use it to construct the 'typeOf'
function for MyGADT m.

  -- ryan

On Mon, Aug 29, 2011 at 2:06 AM, Philip Holzenspies
p...@st-andrews.ac.ukwrote:

  Dear Brandon, Ozgur, et al,

  Thanks very much for you explanation. This seems to be a perfectly
 reasonable explanation; the wrapper-types I used probably explicitly invoke
 typeOf with undefined. The problem here, however, is that in my actual
 program, I don't use ADTs, but I use GADTs, so as to carry the context
 (Monad, Typeable1) with the constructor. To get to this context, I must
 pattern-match with the constructor. It seems hiding contexts (which I really
 like about GADTs) isn't available consistently. Oh well ;)

  Regards,
 Philip


  On 29 Aug 2011, at 01:20, Brandon Allbery wrote:

  On Sun, Aug 28, 2011 at 18:44, Philip Holzenspies 
 p...@st-andrews.ac.ukwrote:

 instance (Typeable1 m, Monad m) => Typeable (MyADT m) where
typeOf t@(MyADT _)


  typeOf is usually invoked with an undefined parameter; it should use
 types, never values.  Here you've defined it to deconstruct what it's
 passed, which means that anything that uses it in the usual way (`typeOf
 (undefined :: someType)') will immediately throw undefined.

  You don't need a deconstructor there; you (correctly) throw away the
 value, and it doesn't provide any type information not already available
 from the instance declaration.  `typeOf t' should be good enough.

  --
 brandon s allbery  allber...@gmail.com
 wandering unix systems administrator (available) (412) 475-9364 vm/sms



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Existential question

2011-08-19 Thread Ryan Ingram
On Wed, Aug 17, 2011 at 4:49 PM, Tom Schouten t...@zwizwa.be wrote:

 {-# LANGUAGE ExistentialQuantification #-}

 -- Dear Cafe, this one works.
 data Kl' s i o = Kl' (i -> s -> (s, o))
 iso' :: (i -> Kl' s () o) -> Kl' s i o
 iso' f = Kl' $ \i s -> (\(Kl' kl') -> kl' () s) (f i)

 -- Is there a way to make this one work also?
 data Kl i o = forall s. Kl (i -> s -> (s, o))
 iso :: (i -> Kl () o) -> Kl i o
 iso f = Kl $ \i s -> (\(Kl kl) -> kl () s) (f i)


Not without moving the quantifier, as Oleg says.  Here is why:

Existential types are equivalent to packing up a type with the constructor;
imagine KI as

data KI i o = KI @s (i -> s -> (s,o))   -- not legal haskell

where @s represents "hold a type s which can be used in the other elements
of the structure".  An example element of this type:

  KI @[Int] (\i s -> (i:s, sum s)) :: KI Int Int

Trying to implement iso as you suggest, we end up with

iso f = KI ?? (\i s -> case f i of ...)

What do we put in the ??.  In fact, it's not possible to find a solution;
consider this:

ki1 :: KI () Int
ki1 = KI @Int (\() s -> (s+1, s))

ki2 :: KI () Int
ki2 = KI @() (\() () -> ((), 0))

f :: Bool -> KI () Int
f x = if x then ki1 else ki2

iso f = KI ?? ??

The problem is that we have multiple possible internal state types!

  -- ryan


 {-
Couldn't match type `s0' with `s'
  because type variable `s' would escape its scope
This (rigid, skolem) type variable is bound by
  a pattern with constructor
Kl :: forall i o s. (i - s - (s, o)) - Kl i o,
  in a lambda abstraction
The following variables have types that mention s0
  s :: s0 (bound at /home/tom/meta/haskell/iso.hs:**11:17)
In the pattern: Kl kl
In the expression: \ (Kl kl) - kl () s
In the expression: (\ (Kl kl) - kl () s) (f i)
 -}


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] difference between class context and deriving

2011-08-02 Thread Ryan Ingram
On Tue, Aug 2, 2011 at 5:57 AM, Patrick Browne patrick.bro...@dit.iewrote:

data Eq a => Set1 a = NilSet1 | ConsSet1 a (Set1 a)
 data Set2 a = NilSet2 | ConsSet2 a (Set2 a) deriving Eq


The former declaration is a language wart, IMO.  All it does is attach a
restriction to the constructors of Set1; try

 :t NilSet1
NilSet1 :: Eq a => Set1 a
 :t NilSet2
NilSet2 :: Set2 a

But it doesn't give you that restriction back when you use it:

 let f (ConsSet1 v _) = v == v
 :t f
f :: Eq a => Set1 a -> Bool

You'd think that since you had to provide the evidence (Eq a) when you
constructed ConsSet1, that it'd be available, but it's not.

-- Seems to have same type
 :t ConsSet2 1 NilSet2
 ConsSet2 1 NilSet2 :: forall t. (Num t) => Set2 t
 :t ConsSet1 1 NilSet1
 ConsSet1 1 NilSet1 :: forall t. (Num t) => Set1 t


Remember that Eq is a superclass of Num.

 let f1 x = ConsSet1 x NilSet1
 let f2 x = ConsSet2 x NilSet2
 :t f1
f1 :: Eq a => a -> Set1 a
f2 :: a -> Set2 a

'deriving Eq' on the definition of Set2 makes the instance  'instance Eq a
=> Eq (Set2 a)'.  So you can construct Set2 at any type, and when the
underlying type 'a' has equality, then 'Set2 a' does as well.
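
That is, roughly the instance that 'deriving Eq' writes for you:

instance Eq a => Eq (Set2 a) where
    NilSet2       == NilSet2       = True
    ConsSet2 x xs == ConsSet2 y ys = x == y && xs == ys
    _             == _             = False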

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Retaining functions in memory

2011-07-27 Thread Ryan Ingram
Use memoization.  Here's an example:

cabal install MemoTrie


import Data.MemoTrie

fib_fix :: (Integer - Integer) - Integer - Integer
fib_fix _ n | n  0 = error invalid input
fib_fix _ 0 = 1
fib_fix _ 1 = 1
fib_fix rec n = rec (n-1) + rec (n-2)

-- 'tie the knot' on a recursive function
func_fix :: ((a -> b) -> (a -> b)) -> (a -> b)
func_fix f = let rec = f rec in rec

-- memoized knot tying; 'memo' stops us from recomputing the same value more
than once.
memo_fix :: HasTrie a => ((a -> b) -> (a -> b)) -> (a -> b)
memo_fix f = let rec = memo (f rec) in rec

-- try comparing the performance of these two on large arguments
fib_slow, fib_fast :: Integer - Integer
fib_slow = func_fix fib_fix
fib_fast = memo_fix fib_fix
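
For example, a quick way to see the difference (a sketch):

main :: IO ()
main = do
  print (fib_fast 30)  -- fast: each subproblem is computed once and cached in the trie
  print (fib_slow 30)  -- slow: exponentially many recursive calls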


On Tue, Jul 26, 2011 at 2:53 AM, Siddhartha Gadgil 
siddhartha.gad...@gmail.com wrote:

   I have been making programs for mathematical applications
 (low-dimensional topology) in Haskell, which I find a delight to code
 in. However, the execution is slow, and this seems to be because
 recursively defined functions seem to be recomputed. For example
 f(100) needs f(15) which needs f(7) ... The dependencies are not
 obvious to the compiler.
   I was looking for a way to retain the values of a specific
 function in memory. Is there some way to do this.
 Thanks,
 Siddhartha

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why the reluctance to introduce the Functor requirement on Monad?

2011-07-25 Thread Ryan Ingram
My guess is that nobody has put forward a clear enough design that solves
all the problems.  In particular, orphan instances are tricky.

Here's an example:

module Prelude where

class (Functor m, Applicative m) => Monad m where
    return :: a -> m a
    (>>=) :: m a -> (a -> m b) -> m b
    (>>) :: m a -> m b -> m b
    a >> b = a >>= const b

    pure = return
    (<*>) = ap
    fmap = liftM

module X where
data X a = ...

module Y where
instance Functor X where fmap = ...

module Z where
instance Monad X where
    return = ...
    (>>=) = ...
    -- default implementation of fmap brought in from Monad definition

module Main where
import X
import Z

foo :: X Int
foo = ...

bar :: X Int
bar = fmap (+1) foo  -- which implementation of fmap is used?  The one from
Y?


  -- ryan


On Sun, Jul 24, 2011 at 8:55 PM, Ivan Lazar Miljenovic 
ivan.miljeno...@gmail.com wrote:

 On 25 July 2011 13:50, Sebastien Zany sebast...@chaoticresearch.com
 wrote:
  I was thinking the reverse. We can already give default implementations
 of class operations that can be overridden by giving them explicitly when we
 declare instances, so why shouldn't we be able to give default
 implementations of operations of more general classes, which could be
 overridden by a separate instance declaration for these?
 
  Then I could say something like a monad is also automatically a functor
 with fmap by default given by... and if I wanted to give a more efficient
 fmap for a particular monad I would just instantiate it as a functor
 explicitly.

 I believe this has been proposed before, but a major problem is that
 you cannot do such overriding.

 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Call for GUI examples - Functional Reactive Programming

2011-07-09 Thread Ryan Ingram
On Thu, Jul 7, 2011 at 11:08 PM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 Do you know any *small GUI programs* that you would *like* to see
 *implemented with Functional Reactive Programming?*


I think this is an admirable effort.  My suggestions are 'Stuff that came
with Windows since forever', because those programs were written for
basically the same reasons.

So, off the top of my head:

A simple calculator
Minesweeper
Notepad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange context reduction with partial application and casting

2011-07-03 Thread Ryan Ingram
On Sun, Jul 3, 2011 at 2:05 AM, Daniel Fischer 
daniel.is.fisc...@googlemail.com wrote:

 But as I understand it, the concern is ghci, where truly local bindings are
 probably rare and type signatures are commonly omitted.
 So putting :s -XNoMonomorphismRestriction in the .ghci file probably
 prevents more confusion and inconvenience than it causes.


Especially given GHCi's extended defaulting rules which makes g = f 1 not
ambiguous.  [It sets a1 to ()]

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Printing the empty list.

2011-07-01 Thread Ryan Ingram
Figuring out how to tell what type ghci is defaulting to was an interesting
exercise.  The sum [] trick seemed cool, so I tried a variant:

Prelude> let f xs = const xs $ show xs
Prelude> f []
[]
Prelude> :t it
it :: [()]
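
The sum [] trick mentioned above works the same way, except the Num
constraint makes GHCi default to Integer instead:

Prelude> sum []
0
Prelude> :t it
it :: Integer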

  -- ryan


On Thu, Jun 30, 2011 at 6:44 PM, Ivan Lazar Miljenovic 
ivan.miljeno...@gmail.com wrote:

 On 1 July 2011 11:35, Brent Yorgey byor...@seas.upenn.edu wrote:
  On Fri, Jul 01, 2011 at 09:05:05AM +1000, Ivan Lazar Miljenovic wrote:
  On 1 July 2011 08:58, Joshua Ball joshbb...@gmail.com wrote:
   GHCi seems to be clever about some things:
  
   If I try to print the empty list in ghci, I encounter no problems:
  
   Prelude> []
   []
   Prelude> show []
   "[]"
   Prelude> print []
   []
  
   Even though the type of the list is clearly unknown, it must be
   picking SOME type. (why does it print [] instead of )?
 
  Type defaulting: if you don't specify a type, then ghci makes it
  [Integer].
 
  In this case I'm pretty sure it is [()] since there is only a Show
  constraint.  If there were a Num constraint it would pick
  Integer.

 Yeah, I forgot about ()

 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Possible bug in GHC 7.0.3

2011-06-28 Thread Ryan Ingram
So this is definitely a GHC bug, but I think the problem is probably
triggered by this line:

instance Serializable a b => IResource a

I don't think this is a valid instance declaration without a functional
dependency on Serializable, as it's impossible to know which type 'b' to use
in the methods of IResource.
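
Something along these lines is what I mean (a sketch; needs
MultiParamTypeClasses and FunctionalDependencies):

class Serializable a b | a -> b

With the dependency, 'b' is determined by 'a', so an instance head like
'instance Serializable a b => IResource a' at least makes sense.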

  -- ryan

On Tue, Jun 28, 2011 at 3:43 AM, Alberto G. Corona agocor...@gmail.comwrote:

 I have an 'impossible' happened error.

 The code may look a little bit convoluted but it is part of my real code.:


 - begin of code--

 {-# LANGUAGE   FlexibleInstances, UndecidableInstances
, MultiParamTypeClasses
 #-}

 class Serializable a b


 class IResource a --The rest of the instance definitions does not
 matter for the error


 instance  Serializable a b => IResource a

 data DBRef a=  DBRef String   a


 instance  (IResource a) => Read (DBRef a)


 data   Votation a= Votation{
   content :: DBRef a
 } deriving (Read)

 --- end of code ---

 gives the following error at compilation time:

 testsrunghc impossiblelloop.hs
 ghc: panic! (the 'impossible' happened)
   (GHC version 7.0.3 for i386-unknown-mingw32):
 solveDerivEqns: probable loop
 (impossiblelloop.hs:20:13-16 main:Main.$fReadVotation{v rhI} [a{tv abB}
 [tv]
 ] base:GHC.Read.Read{tc 2d} [main:Main.Votation{tc rbo}

a{tv abB} [tv]] = [base:GHC.Read.Read{tc 2d}

 (main:Main.DBRef{tc
 rbu}

a{tv abB} [tv])])
 [[main:Main.Serializable{tc rbA} a{tv abB} [tv] b{tv ajE} [tcs]]]

 Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are casts required?

2011-06-06 Thread Ryan Ingram
I always forget to reply all.  Silly gmail.

On Mon, Jun 6, 2011 at 2:07 AM, Ryan Ingram ryani.s...@gmail.com wrote:

 Hi Pat.  There aren't any casts in that code.  There are type annotations,
 but this is different than the idea of a cast like in C.

 For example
 ((3 :: Integer) :: Int)
 is a compile error.

 What you are seeing is that 3 has the type (forall a. Num a => a); that is,
 the literal '3' gets converted by the compiler into

 fromInteger (I# 3#)

 where 3# represents the machine word '3' and I# is the internal constructor
 Word# -> Integer.

 class Num a where
 ...
 fromInteger :: Integer -> a

 So by 'casting', or rather, providing a type annotation, you are specifying
 what instance of Num gets the call to 'fromInteger'.

 As to whether you *need* a type annotation: it depends.  For example:
 foo () = sameId newId 3
 the compiler will infer the type of 'foo' to be
 foo :: forall a. IDs a => () -> a

 If you declare foo as a value, though, you run into the dreaded
 monomorphism restriction, and you might get a complaint from the compiler
 about ambiguity.
 foo2 = sameId newId 3


 The monomorphism restriction forces values to be values; otherwise consider
 this


 -- the usual 'expensive' computation
 fib :: Num a => a -> a
 fib 0 = 1
 fib n = fib (n-1) + fib (n-2)

 x = fib 10

 What's the type of x?  Most generally, it's
 x :: Num a => a

 But this means that x will be recalculated every time it's used; the value
 can't be saved since x doesn't represent a single value but rather a
 separate value for each instance of Num.  You are allowed to manually
 specify this type, but without it, the compiler says 'You meant this to be a
 value!' and forces it to a particular type if it can, or complains about
 ambiguity if it can't.  As to how it does so, look up the rules for
 defaulting and monomorphism in the Haskell report.

   -- ryan



 On Mon, Jun 6, 2011 at 12:45 AM, Patrick Browne patrick.bro...@dit.iewrote:

 Are casts required to run the code below?
 If so why?
 Thanks,
 Pat


 -- Idetifiers for objects
 class (Integral i) => IDs i where
  startId :: i
  newId :: i -> i
  newId i = succ i
  sameId, notSameId :: i -> i -> Bool
 -- Assertion is not easily expressible in Haskell
 -- notSameId i newId i  = True
  sameId i j = i == j
  notSameId i j = not (sameId i j)
  startId = 1


 instance IDs Integer where



 -- are casts need here?
 sameId (newId startId::Integer) 3
 sameId (3::Integer) (4::Integer)
 notSameId (3::Integer) (newId (3::Integer))

 This message has been scanned for content and viruses by the DIT
 Information Services E-Mail Scanning Service, and is believed to be clean.
 http://www.dit.ie

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Sub class and model expansion

2011-05-31 Thread Ryan Ingram
On Tue, May 31, 2011 at 1:40 AM, Patrick Browne patrick.bro...@dit.iewrote:

 Continuing the thread on model expansion.
 I have changed the example trying to focus on expanding models of M in G
 Why is the operation ! ok or RHS but not visible on LHS of G?
 The equation itself does not seem to suffer from the dependent type
 problem of my previous post.

 class M a where
  (!)  ::  a -> a -> a
  e :: a


 class M a => G a where
  (!-)  ::  a -> a -> a
 -- OK in G
  a !- e = e ! a


This doesn't do what you think.  This is equivalent to

(!-) = (\a e -> e ! a)

That is, e is lambda-bound here, not the same e from M.  In this case
you've defined (!-) as flip (!)

When you define functions in a class declaration, you are just defining a
default implementation.  Generally this is used when you can implement some
functions in terms of the others, such as:

class Eq a where
   (==) :: a -> a -> Bool
   (/=) :: a -> a -> Bool

   a == b = not (a /= b)
   a /= b = not (a == b)

Now someone making an instance of this class need only define (==) or (/=);
the other one will be defined via the default instance.  (This has the
somewhat undesirable property that 'instance Eq X where' with no methods is
a valid instance declaration but in that instance == and /= are infinite
loops)

Maybe this will help you think about this: what code do you expect to write
for instances of this class?

instance M Int where
  e = 0
  (!) = (+)

instance G Int where
  -- do I need to write any code here?

It seems to me you are expecting the compiler to automatically derive the
definition 'inverse = negate'; there's no general way for the compiler to do
so, since it doesn't know the structure of your type.

Functions in Haskell, unlike, say, Prolog, only go from left of the = to the
right of the =.  Thanks to referential transparency, you can go backwards
from the point of view of proving properties about your code, but during
evaluation term rewriting only happens from left to right.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Sub class and model expansion

2011-05-29 Thread Ryan Ingram
Hi Patrick.

What you are doing isn't possible in source code (Haskell doesn't prove
things at the value level like a dependently typed language does.)

Usually you document it just as you have, as a comment

-- inverse a ! a = e

You can also specify a QuickCheck property:

propInverse :: (Eq a, Group a) => a -> Bool
propInverse a = (inverse a ! a) == e
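
To actually run it you need concrete instances; for example, assuming you
give Integer instances along the lines of (!) = (+), e = 0 and
inverse = negate (a sketch):

import Test.QuickCheck (quickCheck)

main :: IO ()
main = quickCheck (propInverse :: Integer -> Bool)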

I've put up an example on hpaste at http://hpaste.org/47234/simple_monoid

  -- ryan

On Sat, May 28, 2011 at 2:46 AM, Patrick Browne patrick.bro...@dit.iewrote:

 Hi,
 I'm not sure if this is possible but I am trying to use Haskell’s type
 class to specify  *model expansion*. Model expansion allows the
 specification of new symbols (known as enrichment) or the specification
 further properties that should hold on old symbols. I am trying to
 enrich simplified Monoids to Groups. The code below is not intended to
 do anything other than to specify simplified Groups as a subclass of
 Monoid. The main problem is that I cannot use ! on the LHS of equations
 in Group.

 Is it possible to expand the specification of Monoid to Group using
 Haskell type classes?

 Pat



 data M = M deriving Show

 -- Monoid with one operation (!) and identity e (no associativity)
 class Monoid  a where
  (!)  ::  a -> a -> a
  e :: a
  a ! e = a


 -- A  Group is a Monoid with an inverse, every Group is a Monoid
 -- (a necessary condition in logic: Group implies Monoid)
 -- The inverse property could be expressed as:
 --  1) forAll a thereExists b such that a ! b = e
 --  2)  (inv a) ! a  =  e
 class Monoid a => Group a where
  inverse  ::  a -> a
 -- Haskell does not like  ! on the LHS of equation for inverse in Group.
 --  (inverse a) ! a  = e

 This message has been scanned for content and viruses by the DIT
 Information Services E-Mail Scanning Service, and is believed to be clean.
 http://www.dit.ie

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] State Machine and the Abstractions

2011-05-29 Thread Ryan Ingram
I suggest you take a look at MonadPrompt and/or Operational (two competing
packages, one of which I wrote).

And yes, you probably need some operation

Concurrent :: [Mission ()] -> Mission ()
or
Interrupt :: Mission () -> Mission Bool -> Mission () -> Mission ()

which runs its first argument until the second argument returns True, then
switches.
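
For instance, the patrol from the quoted message below might then look
roughly like this (a sketch; it assumes Mission gets Functor/Monad
instances, e.g. via operational, and uses the constructors as actions):

patrol :: Mission ()
patrol = Interrupt move enemiesNear fight
  where
    move        = MoveTo point1 >> MoveTo point2 >> patrol
    enemiesNear = fmap (not . null) EnemiesAround
    fight       = EnemiesAround >>= mapM_ ShootAt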

On Fri, May 27, 2011 at 12:06 PM, Yves Parès limestr...@gmail.com wrote:

 Hello,

 For the purposes of a simple strategy game, I'd like to build an EDSL that
 expresses missions. A mission could be represented as a state machine.
 With basic bricks such as actions (MoveTo, ShootAt...) or tests
 (EnemiesAround, LowHealth...), I could (ideally dynamically) build some
 strategic behaviors for the units.
 I will take the example of a patrol. Applied to a unit (or a group of
 units), it dictates : go from point 1 to point 2 and then go back and
 repeat. But when you detect an enemy near, leave the patrol path, destroy it
 and then resume your patrol where you left it.

 So if I consider my mission as a monad:
 data Mission = MoveTo Point | ShootAt Unit

 patrol = do
 MoveTo point1
 MoveTo point2
 patrol

 So far so good, but there, the only advantage to use a monad instead of a
 list of MoveTo's is the do-notation.
 And I lack the expression of tests. Using a GADT it could be:

 data Mission a where
 MoveTo :: Point -> Mission ()
 ShootAt :: Unit -> Mission Bool  -- If we have destroyed it or not
 EnemiesAround :: Mission [Unit]  -- The enemies that are maybe in sight
 LowHealth :: Mission Bool -- If I should retreat
 ...

 -- (Monad Mission could be nicely expressed using Heinrich Apfelmus'
 *operational* package)

 patrol = do
 MoveTo point1
 MoveTo point2
 enemies <- EnemiesAround
 mapM_ ShootAt enemies
 patrol

 Aaaand... here comes the trouble: the actions are done *sequentially*.
 My units will move and then look at enemies, they will not monitor their
 environment while they move.
 So I need a way to say: A is your action of patrolling. B is your action of
 surveillance. Do both in parallel, but B is preponderant, as if it successes
 (if enemies are there) it takes over A. So, it is as if I was running two
 state machines in parallel.
 Moreover, the last line (the recursive call to patrol) is wrong, as it will
 restart the patrol from the beginning, and not from where it has been left.
 But this could be corrected by addind a test like which point is the
 closest.

 So I thought about Arrows, as they can express sequential and parallel
 actions, but I don't know if it would be a right way to model the
 interruptions/recoveries.
 What do you think about it? Do you know of similar situations and of the
 way they've been solved?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reverse Show instance

2011-05-19 Thread Ryan Ingram
Think of it this way:

-- Here is some data representing the typeclass 'Show'
data ShowDict a = ShowD (a -> String)
show :: ShowDict a -> a -> String
show (ShowD f) a = f a

-- Here's a sample implementation for Strings
showString :: ShowDict String
showString = ShowD (\s -> "\"" ++ escape s ++ "\"") where
   escape = concatMap escapeChar
   escapeChar '\\' = "\\\\"
   escapeChar '"' = "\\\""
   escapeChar c = [c]

-- Here's an implementation for pairs that uses the implementation for each
piece of the pair
showPair :: ShowDict a -> ShowDict b -> ShowDict (a,b)
showPair (ShowD sa) (ShowD sb) =
  ShowD (\(a,b) -> "(" ++ sa a ++ ", " ++ sb b ++ ")")

-- Here is what you are asking for
implementMe :: ShowDict (a,b) -> ShowDict a
implementMe  = 


On Thu, May 19, 2011 at 2:08 PM, Andrew Coppin
andrewcop...@btinternet.comwrote:

  Cannot deduce (Show x) from context (Show (x, y)).
  Cannot deduce (Show y) from context (Show (x, y)).

 Um... seriously?

 From Prelude, we have

  Show x, Show y => Show (x, y)

 So clearly it works in the forward direction. But apparently not in the
 reverse direction.

 Is this a bug or a feature? (I.e., is there some obscure possibility I
 haven't thought of which means that doing the reverse inference would be
 incorrect?)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is fusion overrated?

2011-05-18 Thread Ryan Ingram
Yes, the goal isn't so much to improve complexity (both are O(1)) but to
reduce the constant factor on that O(1).

In an inner loop like that, allocator/gc calls by far dominate the cost of
the program.  If you can remove them, you've improved the performance of the
program by 10-100x.

In the case where everything is Int, you can even unbox and get entirely in
registers, which gives you comparable performance to a hand-tuned C or
assembly language loop.
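
For instance, something in the spirit of what you get after fusion and
unboxing (a sketch; needs BangPatterns):

sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc !i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

The whole loop runs on unboxed machine integers in registers, with no
list cells allocated at all.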

  -- ryan

On Tue, May 17, 2011 at 10:55 PM, Roman Cheplyaka r...@ro-che.info wrote:

 If one thinks about Haskell data structures as of ordinary data
 structures, fusion seems a great deal -- instead of producing
 intermediate lists and possibly running out of memory, we just run a
 loop and use constant amount of space.

 But Haskell data structures are quite different -- they are produced as
 demanded. Consider the example from the Stream Fusion paper[1]:

f :: Int → Int
f n = sum [ k ∗ m | k ← [1..n], m ← [1..k ] ]

 Assuming the sum is a strict left fold, it consumes elements of lists
 one-by-one and runs in constant space.

 The list part can be transformed to

    foldr (++) [] $ map (\k -> map (\m -> k*m) [1..k]) [1..n]

 which is capable of producing elements one-by-one. So the whole thing
 probably should run in constant space as well.

 Of course I don't claim that fusion is useless -- just trying to
 understand the problem it solves. Are we saving a few closures and cons
 cells here?

 [1] Stream Fusion. From Lists to Streams to Nothing at All.
Duncan Coutts, Roman Leshchinskiy, Don Stewart.

 --
 Roman I. Cheplyaka :: http://ro-che.info/
 Don't worry what people think, they don't do it very often.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-class conditional behavior

2011-05-08 Thread Ryan Ingram
The behavior you are asking for maybeShow violates parametricity, so it
can't exist without some sort of typeclass constraint.

That said, in your particular situation, it's an interesting question.

The Show instance for Either is

instance (Show a, Show b) => Show (Either a b) where ...

so we as programmers know that, given some instance Show (Either a b) that
there must be an instance for a.  But we can't get at it!

Inside the compiler, this instance looks something like this:

data ShowDict a = ShowDict {
 showsPrec :: Int -> a -> String -> String,
 show :: a -> String,
 shows :: a -> String -> String,
 showsList :: [a] -> String -> String
   }

showEither :: (ShowDict a, ShowDict b) -> ShowDict (Either a b)
showEither (sda, sdb) = ShowDict ...

Note that inside the functions returned by showEither we've lost the
parent dictionaries sda/sdb.

However we know the behavior of these functions, and you can hack around it
with a manual show instance that takes advantage of that knowledge:

instance Show t => Show (AV t) where
show (AVLeft a) = drop 5 $ show (Left a)

The 'drop 5' takes off the 'Left ' in the returned string.  To be a bit
smarter you'd also look for surrounding parens and remove them as well, but
this is how you could solve your problem.

All this said, I agree that the presence of 'arr' in Arrow is a problem for
many types of generalized computing.  It overly constrains what can be an
arrow, in my opinion.  I think a better analysis of the primitives required
for arrow notation to work would solve a lot of problems of this type.

  -- ryan

On Sat, May 7, 2011 at 10:14 PM, Nicholas Tung nt...@ntung.com wrote:

 Dear all,

  I'd like to write a function maybeShow :: a -> Maybe String, which
 runs show if its argument is of class Show.

 The context and motivation for this are as follows. I have a GADT type
 which encapsulates abstract-value computation (or constants or error codes),
 a snippet of which is below.

 data AV t where
 AVLeft :: AV a -> AV (Either a b)

 This is used to implement an arrow transformer, and due to Arrows
 mapping all Haskell functions, I cannot put some kind of qualification on
 the constructor, like AVLeft :: Show a => ...

 Of course any replies are welcome, but I do need something implemented
 and stable. If there are GHC-compatible hacks, even an unsafeShow :: a ->
 String, that'd be great. I'd also prefer not to branch on all types which
 could possibly be maybeShow's argument.



 (Concretely, if I have newtype AVFunctor a b c = AVF (a (AV b) (AV
 c)), then the Arrow class declaration forces all types, c.f. variable b, to
 be potential variables of type AV),

 class (Category a) => Arrow a where
   arr :: (b -> c) -> a b c


 Thanks very much,
 Nicholas — https://ntung.com — CS major @ UC Berkeley

 p.s. I posted this question on StackOverflow if you care to get brownie
 points there, http://goo.gl/PrmYW

 p.s. 2 -- if there is a general dump var function in ghci, which does
 more than :info, I'd love to know :)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inputs to classic FRP: unsafeInterleaveIO/unsafePerformIO

2011-04-28 Thread Ryan Ingram
On Tue, Apr 26, 2011 at 11:44 PM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 However, even in a demand-driven implementation, there is one optimization
 that I would like make: when there are multiple external events, say e1 and
 e2, the network splits into subnetworks that react only to one of the
 inputs. For instance, your example would split into two graphs

  e1   e2
  |  \ |  \
  e3  e4 and   e3  e4
  |   ||   |
  e5  e5   e5  e5

 that are independent of each other. Unlike successful filters, these
 subnetworks are known *statically* and it's worth splitting them out.


Yeah, I realize that as well, although you can get the same problem with a
single source, it just makes the network a bit more complicated:

e0 = source
e1 = fromLeft <$> filter isLeft e0
e2 = fromRight <$> filter isRight e0
-- rest of network the same

Anyways, the problem I was getting at is that lets say that e1 and e2 are
both Event Bool, and e1 has a True event at the same time that e2 has a
False event.

Then a behavior derived from e3 is False for that time (assuming behaviors
take the 'last' event in the list?), and a behavior from e4 is True for that
time.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inputs to classic FRP: unsafeInterleaveIO/unsafePerformIO

2011-04-26 Thread Ryan Ingram
Apfelmus, I hope you don't abandon your efforts, at least for the selfish
reason that I enjoy reading your blog entries about trying to implement it!

I was looking at your last entry and trying to understand if/how you solve
the order-dependency problem for events.  In particular:

source events e1, e2

e3 = e1 union e2
e4 = e2 union e1

e5 = e3 union e4

The graph from your description would look something like


e1  e2
| \-A  / \   A-\
e3e3 e4e4
| |   ||
e5e5 e5e5

When I was looking at the FRP problem before, it felt a lot like the
'adaptive' problem, so I was looking at Umut Acar's papers and trying to
build something along those lines.  But perhaps your API is simple enough
that you don't need that degree of complexity.

  -- ryan

On Tue, Apr 26, 2011 at 12:29 AM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 Edward Amsden wrote:

 As far as I can tell, with classic FRP implementations (those which
 use behaviors as a first-class abstraction), the only way to create a
 behavior or
 event based on some external input (for instance keypresses or
 microphone input) is to do something with unsafePerformIO or
 unsafeInterleaveIO. A behavior is a value, which when evaluated at a
 specific time would have to either block its evaluation until input
 could be read, or check the input at that particular time.

 Is there any other way of implementing external behaviors besides that?


 Yes, there are other ways, see for example the implementation here:
 http://tinyurl.com/frp-automaton . This is essentially a pure variant of
 Ryan's implementation.

 My implementation has a serious problem, namely that sharing is lost. I
 think this is the case for Ryan's implementation as well. The state of a
 behavior will be duplicated and updates multiple times. This can be fixed by
 observing sharing, of course.

 I'm currently working on a push-driven FRP implementation. (Though I'm
 getting second thoughts as to whether the small increase in efficiency is
 worth the implementation cost.) See also

 http://apfelmus.nfshost.com/blog/2011/04/24-frp-push-driven-sharing.html



 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell from SML - referrential Transparency?!

2011-04-25 Thread Ryan Ingram
I've been working on Haskell for quite a while and it's not too often that a
beginner shows me a new trick--this trick with trace seems really cool and I
hadn't seen it before.

f x | trace ("f " ++ show x) False = undefined
f ... -- rest of regular definition

Makes it really easy to add the trace and then later comment it out.  One
problem I always have with traces the usual way is that adding/removing them
is kind of annoying--it can change the indentation of your code, etc.  So
this trick is really useful!
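
For instance (a small sketch):

import Debug.Trace (trace)

fact :: Integer -> Integer
fact n | trace ("fact " ++ show n) False = undefined
fact 0 = 1
fact n = n * fact (n - 1)

Commenting out the guard clause turns the tracing off without disturbing
the rest of the definition.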

Thanks Gregory!

  -- ryan


On Tue, Apr 19, 2011 at 12:38 PM, Gregory Guthrie guth...@mum.edu wrote:

 Oops;
 No - my (further) mistake. It is not IO() - f1 returns a data structure
 which implements Show, and the test is really:

   Test1 = do print "Test1:"
  print f1
 (etc..)

 Thanks for the alert on trace.

 I used it like this:
 allocate :: Store -> (Store, Location)
 allocate ( Store(bot,top,sto) ) |
 trace ("allocate " ++ show bot ++ show top)
  False = undefined
 allocate ( Store(bot,top,sto) )  =
let newtop  = top+1

 and it does seem to show every allocation on the first run of f1, but then
 nothing on the second.
 So it is not just a first call to allocate, but all calls under an
 invocation of f1 that don't show.
 Makes me wonder if f1 is even being re-evaluated.

 I did post the code - but don't expect anyone to really wade through and
 debug for me!  :-)
 (The issues that I am asking about are a9b, a9bb at line 435, 438)
 http://hpaste.org/45851/haskell_from_sml_question

 thanks for the help.
 ---

  -Original Message-
  From: Daniel Fischer [mailto:daniel.is.fisc...@googlemail.com]
  Sent: Tuesday, April 19, 2011 2:16 PM
  To: haskell-cafe@haskell.org
  Cc: Gregory Guthrie
  Subject: Re: [Haskell-cafe] Haskell from SML - referrential
 Transparency?!
 
  On Tuesday 19 April 2011 21:10:09, Gregory Guthrie wrote:
   I am pretty new to Haskell, so need some clarification.
   I am porting some code from SML, and getting a result that surprises
 me.
  
   I basically have some functions which work like this:
   f1 =  fa fb fc
   test1 = do print "test1:"
f1
 
  So f1 :: IO something
 
  Being an IO-action, f1 can return different things in different
 invocations since the world in
  which it runs has changed (it might read a file which was modified
 between the first and the
  second invocation, for example).
 
  
   But I ran a few tests, and got odd results - so I ran the same  test
   function twice, and got different results - that was my surprise. I
   did
   this:
   f1 =  fa fb fc
   f2 =  fa fb fc
   test2 = do print "test1:"
f1
f2
  
   and I get different results from the two executions (f1,f2), even
   though they have exactly the same definition. Reversing their order,
   gives the exact same results (i.e. the results are still different, and
 in the
   same original order as f2;f1). Even doing   (f1;f1) gives two different
   results.
 
  Depending on what f1 does, that may be perfectly normal or a serious bug.
  We'd need to see more of the code to determine which.
 
  
   Seems to me that by referential transparency, I should always get the
   same result from the function(s).
  
   So, I added some Debug.trace to the argument functions which are used,
   and I get a trace from the first call(s), but none from the second
   one(s), although I do get the result from each.
 
  Did you do it in the form
 
  fa = trace "fa" realFa
 
  ?
 
  Then the trace is only evaluated the first time fa is evaluated, even if
 fa is called later
  again.
 
  
   It is as if because of the laziness, it someone cached some of the
   intermediate results, so did not re-invoke the functions.
  
   Anyway, totally confused. I must be missing something significant here.
   Thanks for any clarification! (The original code is a bit long, so I
   did not include here...)
 
  http://hpaste.org/
 
  perhaps?
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inputs to classic FRP: unsafeInterleaveIO/unsafePerformIO

2011-04-25 Thread Ryan Ingram
Of course, you could have the 'interpretation' function be non-pure.

For example:

-- Library functions for a hypothetical FRP system
pollEvent :: IO [a] -> Event a
behavior :: a -> Event a -> Behavior a
accumB :: b -> (b -> a -> b) -> Event a -> Behavior b
accumE :: b -> (b -> a -> b) -> Event a -> Event b
union :: Event a -> Event a -> Event a
runFRP :: (a -> IO Bool) -> Behavior a -> IO ()
-- Event & Behavior become instances of Functor & Applicative

-- and now a hypothetical implementation
data Event a where
   Event :: s -- initial state
   -> (s -> IO ([a], s))  -- tick
   -> Event a
data Behavior a = Behavior a (Event a)

pollEvent act = Event () $ \() -> do
 xs <- act
 return (xs, ())

behavior = Behavior

union (Event sL0 tickL) (Event sR0 tickR) = Event (sL0,sR0) tick where
tick (sL, sR) = do
(ls, sL') <- tickL sL
(rs, sR') <- tickR sR
return (ls ++ rs, (sL', sR'))

accumB b0 f e = Behavior b0 $ accumE b0 f e

accumE b0 f (Event s0 tickE) = Event (b0, s0) tick where
tick (b, s) = do
(as, s') <- tickE s
let bs = scanl f b as
return (bs, (last bs, s'))

-- Functor, Applicative instances are pretty easy and left as an exercise

runFRP tick (Behavior b0 (Event s0 e)) = runFRP' b0 s0 where
runFRP' b s = do
(bs, s') - e s0
let val = last (b:bs)
k - tick b
when k $ runFRP tick (Behavior

k - tick b




-- sample application
keypress :: Event Char
keypress = pollEvent getCurrentPressedKeys where
   getCurrentPressedKeys = undefined -- exercise for the reader







On Mon, Apr 25, 2011 at 5:28 PM, Edward Amsden eca7...@cs.rit.edu wrote:

 As far as I can tell, with classic FRP implementations (those which
 use behaviors as a first-class abstraction), the only way to create a
 behavior or
 event based on some external input (for instance keypresses or
 microphone input) is to do something with unsafePerformIO or
 unsafeInterleaveIO. A behavior is a value, which when evaluated at a
 specific time would have to either block its evaluation until input
 could be read, or check the input at that particular time.

 Is there any other way of implementing external behaviors besides that?

 --
 Edward Amsden
 Student
 Computer Science
 Rochester Institute of Technology
 www.edwardamsden.com

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inputs to classic FRP: unsafeInterleaveIO/unsafePerformIO

2011-04-25 Thread Ryan Ingram
Mail fail, haha.  Code fixed.
For example:

-- Library functions for a hypothetical FRP system
pollEvent :: IO [a] -> Event a
behavior :: a -> Event a -> Behavior a
accumB :: b -> (b -> a -> b) -> Event a -> Behavior b
accumE :: b -> (b -> a -> b) -> Event a -> Event b
union :: Event a -> Event a -> Event a
runFRP :: (a -> IO Bool) -> Behavior a -> IO ()
-- Event & Behavior become instances of Functor & Applicative

-- and now a hypothetical implementation
data Event a where
   Event :: s -- initial state
   -> (s -> IO ([a], s))  -- tick
   -> Event a
data Behavior a = Behavior a (Event a)

pollEvent act = Event () $ \() -> do
 xs <- act
 return (xs, ())

behavior = Behavior

union (Event sL0 tickL) (Event sR0 tickR) = Event (sL0,sR0) tick where
tick (sL, sR) = do
(ls, sL') <- tickL sL
(rs, sR') <- tickR sR
return (ls ++ rs, (sL', sR'))

accumB b0 f e = Behavior b0 $ accumE b0 f e

accumE b0 f (Event s0 tickE) = Event (b0, s0) tick where
tick (b, s) = do
(as, s') <- tickE s
let bs = scanl f b as
return (bs, (last bs, s'))

-- Functor, Applicative instances are pretty easy and left as an exercise
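-- (for illustration, one way the Functor instance could go -- a sketch)
instance Functor Event where
    fmap f (Event s0 tick) = Event s0 $ \s -> do
        (as, s') <- tick s
        return (map f as, s')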

runFRP tick (Behavior b0 (Event s0 e)) = runFRP' b0 s0 where
runFRP' b s = do
(bs, s') <- e s
let b' = last (b:bs)
k <- tick b'
when k $ runFRP' b' s'

-- sample application
keypress :: Event Char
keypress = pollEvent getCurrentPressedKeys where
   getCurrentPressedKeys = undefined -- exercise for the reader

-- application prints the last key you pressed until you press 'q'
main = runFRP tick keypress where
tick k = print k >> return (k /= 'q')
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Programming Chalenges: The 3n+1 problem

2011-04-14 Thread Ryan Ingram
So if we were to emulate your Java solution, we'd do

import Data.Array
import Prelude hiding (lookup)

cacheSize :: Int
cacheSize = 65536

table :: Array Int Integer
table = listArray (1, cacheSize) (1 : map (go . fromIntegral) [2..cacheSize]) where
    go n
        | even n    = 1 + lookup (n `div` 2)
        | otherwise = 1 + lookup (3 * n + 1)

lookup :: Integer -> Integer
lookup n
    | n <= fromIntegral cacheSize = table ! fromInteger n
    | even n    = 1 + lookup (n `div` 2)
    | otherwise = 1 + lookup (3 * n + 1)

The rest of the code is just some simple i/o.
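
Something along these lines would do for the i/o part (a sketch):

main :: IO ()
main = interact $ unlines . map solve . lines where
    solve line = case map read (words line) of
        [i, j] -> unwords [show i, show j,
                           show (maximum (map lookup [min i j .. max i j]))]
        _      -> ""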

The table is filled up lazily as you request values from it.

On Thu, Apr 14, 2011 at 3:29 AM, Dmitri O.Kondratiev doko...@gmail.comwrote:

 3n+1 is the first, warm-up problem at Programming Chalenges site:

 http://www.programming-challenges.com/pg.php?page=downloadproblemprobid=110101format=html

 (This problem illustrates Collatz conjecture:

 http://en.wikipedia.org/wiki/3n_%2B_1#Program_to_calculate_Collatz_sequences
 )

 As long as the judge on this site takes only C and Java solutions, I
 submitted in Java some add-hock code (see at the end of this message) where
 I used recursion and a cache of computed cycles. Judge accepted my code and
 measured  0.292 sec with best overall submissions of 0.008 sec to solve the
 problem.

 *** Question: I wonder how to implement cache for this problem in Haskell?
 At the moment, I am not so much interested in the speed of the code, as in
 nice implementation.

 To illustrate my question I add the problem description and my Java
 solution at the end of this message.
 Thanks!

 *** Problem

 Consider the following algorithm to generate a sequence of numbers. Start
 with an integer *n*. If *n* is even, divide by 2. If *n* is odd, multiply
 by 3 and add 1. Repeat this process with the new value of *n*, terminating
 when *n* = 1. For example, the following sequence of numbers will be
 generated for *n* = 22:
 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
 It is *conjectured* (but not yet proven) that this algorithm will
 terminate at *n* = 1 for every integer *n*. Still, the conjecture holds
 for all integers up to at least 1, 000, 000.

 For an input *n*, the *cycle-length* of *n* is the number of numbers
 generated up to and *including* the 1. In the example above, the cycle
 length of 22 is 16. Given any two numbers *i* and *j*, you are to
 determine the maximum cycle length over all numbers between *i* and *j*, *
 including* both endpoints.

 Input The input will consist of a series of pairs of integers *i* and *j*,
 one pair of integers per line. All integers will be less than 1,000,000 and
 greater than 0.

 OutputFor each pair of input integers *i* and *j*, output *i*, *j* in the
 same order in which they appeared in the input and then the maximum cycle
 length for integers between and including *i* and *j*. These three numbers
 should be separated by one space, with all three numbers on one line and
 with one line of output for each line of input.

 Sample Input

 1 10
 100 200
 201 210
 900 1000

 Sample Output

 1 10 20
 100 200 125
 201 210 89
 900 1000 174

 *** my Java solution

 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 public class Main {
   final static BufferedReader reader_ = new BufferedReader(new 
 InputStreamReader(System.in));
   /**
* @param args
*/
   public static void main(String[] args) {
   new Problem().run();
   }   
   static String[] ReadLn() {
   String[] tokens = null;
   try {
   String line = reader_.readLine();
   String REGEX_WHITESPACE = "\\s+";
   String cleanLine = 
 line.trim().replaceAll(REGEX_WHITESPACE, " ");
   tokens = cleanLine.split(REGEX_WHITESPACE); 
 
   } catch (Exception e) {}
   return tokens;
   }
 }

 class Problem implements Runnable {
   long CACHE_SIZE = 65536;
   private final long[] cache_ = new long[(int) CACHE_SIZE];
   /**
* Compute cycle length for a single number
*
* @param n number for which we find cycle length
* @return cycle length
*/ 
   long cycleLen(long n) {
   long len = 1;
   if (n != 1) {
   len = getFromCache(n);
   if (len == 0) { //not yet in cache
   // Recursively compute and store all 
 intermediate values of cycle length
   if ((n & 1) == 0) {
   len = 1 + cycleLen(n >> 1);
   } else {
   len = 1 + cycleLen(n * 3 + 1);
   }
   putInCache(n, len);
   }
   }
   return len;
   }
   
   void putInCache(long n, long len) {
   if(n < CACHE_SIZE) {
   

Re: [Haskell-cafe] fundeps = type family

2011-04-03 Thread Ryan Ingram
On Sun, Apr 3, 2011 at 1:00 PM, Tad Doxsee tad.dox...@gmail.com wrote:

   Equality constraints ... enable a simple translation of programs
using functional dependencies into programs using family
synonyms instead.

 So I tried:

 class (T s ~ a) => ShapeC a s where
  type T s :: *
  draw :: s -> String
  copyTo :: s -> T s -> T s -> s

 but got a compile error:

  Alas, GHC 7.0 still cannot handle equality superclasses: T s ~ a

 So my question is, how does one convert the above code to use type
 families instead of functional dependencies?  Is one technique
 preferable over another?


Sadly the documentation assumes the feature that you show is missing.  That
said, you don't need that feature for the simple FD you have.

Just do

class ShapeC s where
   type T s :: *
   draw :: s -> String
   copyTo :: s -> T s -> T s -> s


This code should work:

data ShapeD a = forall s. (ShapeC s, a ~ T s) => MkShapeD s

instance ShapeC (ShapeD a) where
type T (ShapeD a) = a
draw (MkShapeD s) = draw s
copyTo (MkShapeD s) x y = MkShapeD (copyTo s x y)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to optmize this code?

2011-03-31 Thread Ryan Ingram
On Thu, Mar 31, 2011 at 7:29 AM, Daniel Fischer 
daniel.is.fisc...@googlemail.com wrote:

 Err, terminology problem here.
 Strictly speaking, a function is strict iff

 f _|_ = _|_

 while we are talking here about evaluation strategies, so we should better
 have spoken of eager vs. deferred evaluation.
 A non-strict function has different semantics from a strict one by
 definition.

 If you have a strict function, you may evaluate its argument eagerly
 without changing the result¹, while eager evaluation of a non-strict
 function's argument may produce _|_ where deferred evaluation wouldn't.


This is almost but not entirely true.  Consider

f x = error "f is not implemented"

Clearly, f _|_ = _|_, so f is strict.

f (error "bang!")

might, depending on how strictness analysis proceeds, generate an "f is not
implemented" error or a "bang!" error.  But that's only observable at the IO
level, and the optimization is considered important enough, that potentially
generating a different exception is allowed.

I think this paper covers some of the details:
http://research.microsoft.com/en-us/um/people/simonpj/papers/imprecise-exn.htm

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Use of uninstantiated type class

2011-03-06 Thread Ryan Ingram
On Fri, Mar 4, 2011 at 3:45 PM, Yves Parès limestr...@gmail.com wrote:

 Hello,

 For testing purposes, I am trying to make an overlay to IO which carries a
 phantom type to ensure a context.
 I define contexts using empty type classes :

 class CtxFoo c
 class CtxBar c

 The overlay :

 newtype MyIO c a = MyIO (IO a)

 Then I define some methods that run only a specific context :

 runFoo :: (CtxFoo c) => MyIO c a -> IO a
 runFoo (MyIO x) = x

 runBar :: (CtxBar c) => MyIO c a -> IO a
 runBar (MyIO x) = x

 And then an action that runs in context 'Foo' :

 someAction :: (CtxFoo c) => MyIO c ()
 someAction = putStrLn "FOO"

 Then I run it :

 main = runFoo someAction

 But obiously, GHC complains that my type 'c' remains uninstantiated :

 Ambiguous type variable `c' in the constraint:
   (CtxFoo c) arising from a use of `runFoo'
 Probable fix: add a type signature that fixes these type variable(s)
 In the expression: runFoo someAction
 In an equation for `main': main = runFoo someAction


 Is there a way to deal with this ?
 The interest of using type classes and not empty types to represent the
 contexts is that it stays simple, and that I can do that :

 someAction2 :: (CtxFoo c, CtxBar c) => MyIO c ()
 someAction2 = putStrLn "FOO and BAR"

 ... a function that can run in both contexts.


data X
instance CtxFoo X

runFoo (someAction :: MyIO X ())

data Y
instance CtxFoo Y
instance CtxBar Y

runFoo (someAction2 :: MyIO Y ())
runBar (someAction2 :: MyIO Y ())

runFoo (someAction :: MyIO Y ()) -- also works since Y provides both a Foo
and Bar context)

  -- ryan



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Auto elimination of MVars using a monad or monad transformer.

2011-02-26 Thread Ryan Ingram
You might want to take a look at
http://hackage.haskell.org/package/Adaptivesince it seems really
similar to what you are trying to do.  In fact, you
might also want to google 'Functional Reactive Programming'.

  -- ryan

On Thu, Feb 24, 2011 at 10:41 PM, Chris Dew cms...@gmail.com wrote:

 Hello, just like everyone else, I have a question about monads.  I've
 read the tutorials, written one monad myself (not in this email), but
 I still consider myself a Haskell beginner.

 * Does GHC eliminate unneeded MVars during compilation?

 I'm expecting that it doesn't, as that would mean optimising away
 ForkIOs, which would be quite a thing to do.  I've included example
 code below.

 * Is there a monad which allows their automatic elimination of MVars
 (or their creation only when necessary)?

 This would be similar to how the IO monad allows you to do purely
 functional things with a do block, using let.

 I've had a go at a lifting function, which wraps a pure function into
 an IO action which forever reads from one MVar and writes to another.
 What I'm looking for is some form of Monadic context in which many
 pure functions, MVar fillers and MVar consumers could be linked
 together, where only the necessary MVars remain (or were created) at
 compilation time.

 * Would this be a monad, or a monad transformer?

 * Can you specialise a monad transformer on a single base (in this
 case IO) so that you can use forkIO in the bind or return?

 Thanks,

 Chris.


 module Main (
 main
 )
 where

 import Control.Concurrent (forkIO, MVar, newEmptyMVar, putMVar,
 takeMVar, ThreadId, threadDelay)
 import Control.Monad (forever)

 stepA :: MVar String -> IO ()
 stepA boxa = forever $ do
  line <- getLine
  putMVar boxa line

 stepB :: MVar String -> IO ()
 stepB boxb = forever $ do
  line <- takeMVar boxb
  putStrLn line

 -- This simply wraps a string in brackets.
 bracket :: String -> String
 bracket x = "(" ++ x ++ ")"

 -- This lifts a function into an action which forever performs the function
 -- between the two MVars given.
 lft :: (a -> b) -> MVar a -> MVar b -> IO ()
 lft f c d = forever $ do
 x <- takeMVar c
 putMVar d (f x)

 -- Just like C's main.
 main :: IO ()
 main = do
  box <- newEmptyMVar
  box2 <- newEmptyMVar
  forkIO $ stepA box
  forkIO $ lft bracket box box2
  forkIO $ stepB box2
  threadDelay 1000 -- Sleep for at least 10 seconds before exiting.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] help for the usage on mfix

2011-02-23 Thread Ryan Ingram
Just write a loop:

 let loop gs gu
| Just z <- find_obj gu usyms = do
...
(gs', gu') <- handle_obj_ar ...
loop gs' gu'
| otherwise = return (gs,gu)
 (gs, gu) <- loop def undef

mfix is for when you have mutually recursive data but you want the IO
operation to only execute once.  It's most useful when working with some
sort of lazy data structure like a list, tree, or graph.

I can't come up with an example for IO off the top of my head, but for the
ICFP contest this year I wrote a gate/wire embedded language which had code
that looks like this:

sample wireIn = do
   rec
   (wireOut,a) <- gate (wireIn,d)
   (d,b) <- gate (a, b)
   return wireOut

which would create a circuit like this:

---in-[  ]--out
  +-d-[  ]--a--[  ]-d---+
  | +-b-[  ]-b-+ |
  | +---+ |
  +---+

This code translates to something like

sample wireIn = do
    (wireOut, _, _) <- mfix $ \(_, b, d) -> do
        (wireOut', a) <- gate (wireIn, d)
        (d', b') <- gate (a,b)
        return (wireOut', b', d')
    return wireOut

The key is that gate was lazy in its arguments; the gate didn't care what
its input wires were, it just needed them to exist at the time you asked for
the entire circuit definition.  mfix says 'run the *effects* in this code
once, but the *data* might be recursive'.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] coding a queue with reactive

2011-02-14 Thread Ryan Ingram
One way to think about Reactive's notion of Future is as a single
element of an event stream--something that might happen (yielding a
value) some time in the future.

'mempty' on futures is a future that never happens, and 'mappend' says
to pick the first of two futures to happen.
m = k waits for m's future to happen, then passes the result onto k.

So my definition of stateMachine says:

- Wait for either the future returned by 'run s0' or the first event
in the input event stream.
- If the future returned by 'run s0' happens first, we have an event
instance--return an event whose 'rest of the event stream' is the next
state of the state machine
- If the future from the input event stream happens first, update the
state and continue without spawning an event.
- If they happen at the same time, 'run s0' wins (mappend is left-biased)

Good luck!

  -- ryan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] coding a queue with reactive

2011-02-11 Thread Ryan Ingram
Hi Sam.  I don't know much about the performance problems you are
seeing, but I think your solution is more cleanly implemented just
under the event level with futures.

I think the reactive function you want has a type like this:

stateMachine :: s -> (a -> s -> s) -> (s -> Future (b, s)) -> Event a -> Event b

I don't think that function currently exists in Reactive, but the semantics are:

stateMachine initialState updateState runner input
  | runner initialState -> (b, nextState) happens before the first
event in input = output the b, then stateMachine nextState
updateState runner input
  | input -> (Stepper a input') happens first = let nextState =
updateState a nextState, then stateMachine nextState updateState
runner input'

Here's my thoughts on an implementation:

stateMachineF s0 upd run (Event inp) = do
x <- mappend (Left <$> run s0) (Right <$> inp)
case x of
Left (b,sNext) -> return (Stepper b (stateMachineF sNext upd run inp))
Right (Stepper a inpNext) -> stateMachineF (upd a s0) upd run inpNext

stateMachine s0 upd run inp = Event $ stateMachineF s0 upd run inp

Then we can do something like

queue delay = stateMachine Nothing upd run . withTimeE where
run Nothing = mempty
run (Just (t, a, q)) = future t (a, sNext) where
sNext = fmap (\(a', q') -> (t + delay, a', q')) (viewQ q)
upd (time, x) Nothing = Just (time + delay, x, emptyQ)
upd (time, x) (Just (t, a, q)) = Just (t, a, pushQ x q)

given these queue functions:

emptyQ :: Queue a
viewQ :: Queue a -> Maybe (a, Queue a)
pushQ :: a -> Queue a -> Queue a

which can be pretty easily implemented on the standard functional
queue structure

data Queue a = Q [a] [a]
emptyQ = Q [] []
pushQ a (Q fs bs) = Q fs (a:bs)
viewQ (Q [] []) = Nothing
viewQ (Q (a:fs) bs) = Just (a, Q fs bs)
viewQ (Q [] bs) = viewQ (Q (reverse bs) [])


On Wed, Feb 9, 2011 at 1:26 PM,  sam.roberts.1...@gmail.com wrote:
 Hi all,

 I hope someone is interested in helping me out with the reactive library.
 I am trying to implement a function queue in reactive:

 queue :: Double -> Event a -> Event a

 This is a simple queue: events from the event stream coming into the queue,
 queue up waiting to be processed one by one. Processing an event takes a
 constant amount of time for every event. The output of the queue function
 is the stream of processed events.

 My current (deficient) implementation of the queue function is:

 queue dt eventsIn =
 do
 (a,exitT) <- withExitTime eventsIn
 _ <- atTime exitT
 return a
 where
 withExitTime = scanlE calcExitTime (undefined, -1/0) . withTimeE
 calcExitTime (_,prevExitT) (a,inT) = (a, (max inT prevExitT) + dt)

 I am having three problems.

 1 - I find my implementation of the queue is less clear then an imperative
 description of a queue.

 2 - I rely on being able to calculate the exit time of an event when it
 first arrives at the queue, whereas an imperative queue would simply store
 the event in queue and only need to calculate the output time once the event
 was popped off the queue. If I want to do something similar with my
 function,
 I think I need to make some sort of recursive definition of a queue which
 responds to it's own exit events. I've tried to code this up, but have not
 managed to wrap my brain around the concept.

 3 - The code performs horribly! I am guessing that this is because I have
 not
 told reactive that the exit events preserve the ordering of the input
 events,
 but I'm not sure how to encode that relationship in reactive.

 (It's worth noting here that I actually have a fourth problem too: I get
 linker errors while trying to compile a profiling version of the program ...
 but that's a separate topic.)

 It's also worth noting, from a performance point of view, that a much
 simpler
 delay function with similar use of the bind function performs badly as
 well:

 delay :: Double -> Event a -> Event a
 delay dt es = do (e,t) <- withTimeE es
 _ <- atTime (t+dt)
 return e

 I'd appreciate any light that anyone could shed on any of these problems.
 If there's a better way of structuring my queue function, or if there's a
 better way of changing event times in reactive, I am open to all
 suggestions.

 Many thanks,
 Sam
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Instancing Typeable for monad transformers?

2011-02-03 Thread Ryan Ingram
Can you just wrap it?  Something like this:

-- put your monad type here
type M a = Iteratee ... a

data W a = W (Iteratee ... a) deriving Typeable
unW (W x) = x

toDynW :: Typeable a => M a -> Dynamic
toDynW x = toDyn (W x)

castM :: (Typeable x, Typeable a) => x -> Maybe (M a)
castM = fmap unW . cast

  -- ryan

On Tue, Feb 1, 2011 at 10:02 PM, John Millikin jmilli...@gmail.com wrote:
 Is there any reasonable way to do this if I want to cast a monadic
 value? For example:

 castState :: (Typeable a, Typeable s, Typeable1 m, Typeable b) => a -> Maybe 
 (StateT s m b)
 castState = Data.Typeable.cast

 None of the common monad transformers declare instances of Typeable,
 so I don't know if the concept itself even works.

 The use case here is that one of my library users wants to return an
 Iteratee from code running in hint, which requires that any extracted
 values be Typeable. My first attempt at an extension-free instance is
 something like this:

 import Data.Enumerator
 import Data.Typeable

 instance (Typeable a, Typeable1 m) => Typeable1 (Iteratee a m) where
        typeOf1 i = rep where
                typed :: (a -> b) -> b -> a -> a
                typed _ _ a = a

                ia :: a -> Iteratee a m b
                ia = undefined

                im :: m c -> Iteratee a m b
                im = undefined

                rep = mkTyConApp (mkTyCon "Data.Enumerator.Iteratee") [tyA, tyM]
                tyA = typeOf (typed ia i undefined)
                tyM = typeOf1 (typed im i undefined)

 which, besides being ugly, I have no idea if it's correct.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: Applicative => Monad: Call for consensus

2011-01-24 Thread Ryan Ingram
On Fri, Jan 21, 2011 at 7:58 PM, Casey Hawthorne cas...@istar.ca wrote:
 uj supplied this:

 About the discussion
 putStrLn (readLn + (5 :: Int))..

 I'll write it as the following line,

 importing Control.Applicative
 main = (+) <$> readLn <*> (return 3)

 They look almost exactly same in my eyes..

You're missing some bits.

main = print =<< liftM2 (+) readLn (return 3)

Which I assert looks like more line noise than some perl programs I've read. :)

Now, you *can* get away with simplifying this to
main = print =<< (readLn :: IO Int) + 3
assuming an instance Num a => Num (IO a), which sort-of works (the Show
instance and pattern matching are both quite broken; given the naive
implementation of fib, fib readLn will give you quite a surprising
result!)
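
For reference, a minimal sketch of what such an orphan instance could look
like (this is the assumption spelled out, not something any library provides;
abs and signum are defined only to satisfy the class):

import Control.Applicative (liftA2)

instance Num a => Num (IO a) where
  fromInteger = return . fromInteger
  (+)         = liftA2 (+)
  (-)         = liftA2 (-)
  (*)         = liftA2 (*)
  negate      = fmap negate
  abs         = fmap abs
  signum      = fmap signum

main :: IO ()
main = print =<< (readLn :: IO Int) + 3   -- entering 4 prints 7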

But I think this case proves the point quite well: it's a special case
where Num turns out to be very friendly.  Why can't the whole language
be that friendly?

  -- ryan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: Applicative => Monad: Call for consensus

2011-01-21 Thread Ryan Ingram
Interesting little paper, Tyson.

You bring up other programming languages and 'ad-hoc systems for
resolving ambiguities'; I agree with you that these systems generally
have no strong theoretical basis, but I'm not sure that's a terribly
bad thing.

I think what a programmer actually wants from ambiguity resolution is
something *predictable*; C++'s system is definitely stretching the
boundaries of predictability, but any case where I have to break out a
calculator to decide whether the compiler is going to choose
specification A or specification B for my program seems like a
failure.  I'd much rather the solution wasn't always 'the most
probable' but at least was easy for me to figure out without thinking
too hard.

The goal is to easily know when I have to manually specify ambiguity
resolution and when I can trust the compiler to do it for me.  I
didn't completely follow the math in your paper, so maybe it would turn out
to be simple if it were implemented, but it wasn't clear to me.  At the
least, I think you should add examples of the types of ambiguity
resolution you'd like the compiler to figure out and what your
probability measure chooses as the correct answer in each case.

Anyways, thanks for the interesting read.  I'm excited to see work on
making a better type *inference* system, since much of the work lately
seems to be on making a better *type* system at the cost of more often
manually specifying types.

I work in a traditional programming industry, and most of the people
from work that I talk to about Haskell are frustrated that they can't
just write putStrLn (readLn + (5 :: Int)) and have the compiler figure
out where the lifts and joins go.  After all, that just works in C[1]!
 What's the point of having the most powerful type system in the
universe if the compiler can't use it to make your life easier?

  -- ryan

[1] sample program:
int readLn(); // reads a line from stdin and converts string to int
void putStrLn(int x); // prints an int to stdout

void main() { putStrLn(readLn() + 5); }

On Fri, Jan 21, 2011 at 8:43 AM, Tyson Whitehead twhiteh...@gmail.com wrote:
 On January 19, 2011 15:28:33 Conor McBride wrote:
 In each case, the former has (++) acting on lists of strings as pure values,
 while the latter has (++) acting on strings as values given in
 []-computations.

 The type [String] determines a domain; it does not decompose uniquely into a
 notion of computation and a notion of value. We currently resolve this
 ambiguity by using one syntax for pure computations with [String] values
 and a different syntax for [] computations with String values.
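
To make the two readings concrete (purely illustrative):

import Control.Applicative ((<$>), (<*>))   -- redundant on recent GHCs

asValues :: [String]
asValues = ["ab"] ++ ["cd"]                 -- (++) on [String] as pure values
                                            -- == ["ab","cd"]

asComputation :: [String]
asComputation = (++) <$> ["ab"] <*> ["cd"]  -- (++) on String, lifted into []
                                            -- == ["abcd"]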

 Just as we use newtypes to put a different spin on types which are
 denotationally the same, it might be worth considering a clearer (but
 renegotiable) separation of the computation and value aspects of types,
 in order to allow a syntax in which functions are typed as if they act on
 *values*, but lifted to whatever notion of computation is ambient.

 Yes.  That makes sense.  Thank you both for the clarification.  The idea of
 explicitly separating the two aspects of types is an interesting one.

 The automated approach I had been thinking of was to always take the simplest
 context possible.  (i.e., for the above, list of strings as pure values).

 To this end I've been working on a measure for the complexity of the
 application operator.  I've got a draft at

 http://www.sharcnet.ca/~tyson/haskell/papers/TypeShape.pdf

 I'm still working on my thinking on polymorphic types though, so everything
 from section 2.2 onwards is subject to change (especially 2.3 and the
 conclusion).

 Cheers!  -Tyson




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] class-instance

2011-01-20 Thread Ryan Ingram
On Wed, Jan 19, 2011 at 11:56 PM, Patrick Browne patrick.bro...@dit.ie wrote:
 I am trying to see how this requirement can be represented using
 just the normal instance-implements-class relation for a comparison with
 a specification language approach.

 If there is no simple way to do this using type classes then I am
 obviously mis-using the technique.

Going back to your original message:

-- My intention is that the PERSON class should *specify*
-- that a person has a constant id called p1
-- and that a person has a name that can be found from the id.

You can do this with type level numerals.

data Z
data S a

type Zero = Z
type One = S Zero
type Two = S One
-- etc

class PersonId a where
  name :: a -> String

instance PersonId Z where
  name _ = "John"
instance PersonId (S Z) where
  name _ = "Julie"

main = putStrLn (name (undefined :: One))  -- prints Julie
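
And the "constant id called p1" from the comment above can then simply be a
value whose type selects one of the numerals (a sketch only):

p1 :: One
p1 = undefined   -- the id carries information only at the type level

-- name p1 == "Julie"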

  -- ryan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

