Re: [Haskell-cafe] how to nicely implement phantom type coersion?

2005-12-08 Thread Thomas Jäger
Hello,

Since you're already using GADTs, why not also use them to witness type
equality:

import GHC.Exts

data Patch a b = Patch Int Int

data Sequential a c where
  Sequential :: Patch a b -> Patch b c -> Sequential a c

data MaybeEq :: * -> * -> * where
  NotEq :: MaybeEq a b
  IsEq  :: MaybeEq a a

(=//=) :: Patch a b -> Patch c d -> MaybeEq b c
Patch _ x =//= Patch y _
  | x == y    = unsafeCoerce# IsEq
  | otherwise = NotEq

sequenceIfPossible :: Patch a b -> Patch c d -> Maybe (Sequential a d)
sequenceIfPossible p q
  | IsEq <- p =//= q = Just $ Sequential p q
  | otherwise        = Nothing

Notice the usefulness of pattern guards. EqContext could be defined as

data EqContext :: * -> * -> * where
  EqWitness :: EqContext a a

(though I usually prefer to just call both the data type and the
constructor 'E'.)
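For reference, the snippets above assemble into a small self-contained
program. This is a sketch, not the darcs implementation: it substitutes the
portable Unsafe.Coerce.unsafeCoerce for unsafeCoerce#, and the Int endpoints
standing in for patch identities are invented for the demo.

```haskell
{-# LANGUAGE GADTs #-}
-- Sketch of the witness technique above. 'unsafeCoerce' replaces
-- 'unsafeCoerce#' so the example runs with a plain import; the Int fields
-- are hypothetical runtime identities of the phantom start/end states.
import Unsafe.Coerce (unsafeCoerce)

data Patch a b = Patch Int Int

data Sequential a c where
  Sequential :: Patch a b -> Patch b c -> Sequential a c

data MaybeEq a b where
  NotEq :: MaybeEq a b
  IsEq  :: MaybeEq a a

(=//=) :: Patch a b -> Patch c d -> MaybeEq b c
Patch _ x =//= Patch y _
  | x == y    = unsafeCoerce IsEq   -- justified only by the runtime check
  | otherwise = NotEq

sequenceIfPossible :: Patch a b -> Patch c d -> Maybe (Sequential a d)
sequenceIfPossible p q
  | IsEq <- p =//= q = Just (Sequential p q)
  | otherwise        = Nothing

main :: IO ()
main = do
  -- endpoints match: 1 == 1, so the patches compose
  putStrLn (maybe "no" (const "yes") (sequenceIfPossible (Patch 0 1) (Patch 1 2)))
  -- endpoints differ: 1 /= 2, so they do not
  putStrLn (maybe "no" (const "yes") (sequenceIfPossible (Patch 0 1) (Patch 2 3)))
```

The pattern guard `IsEq <- p =//= q` is what refines the type: inside that
branch the compiler knows the middle indices coincide.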


Thomas

On Thu, 2005-12-08 at 09:23 -0500, David Roundy wrote:
 The trickiness is that we need to be able to check for equality of two
 patches, and if they are truly equal, then we know that their ending states
 are also equal.  We do this with a couple of operators:
 
 (=\/=) :: Patch a b -> Patch a c -> Maybe (EqContext b c)
 (=/\=) :: Patch a z -> Patch b z -> Maybe (EqContext a b)
 
 data EqContext a b =
 EqContext { coerce_start :: Patch a z -> Patch b z,
 coerce_end :: Patch z a -> Patch z b,
 backwards_coerce_start :: Patch b z -> Patch a z,
 backwards_coerce_end :: Patch z b -> Patch z a
   }
 
 where we use the EqContext to encapsulate unsafeCoerce so that it can only
 be used safely.  The problem is that it's tedious figuring out whether to
 use coerce_start or coerce_end, or backwards_coerce_end, etc.  Of course,
 the huge advantage is that you can't use these functions to write buggy
 code (at least in the sense of breaking the static enforcement of patch
 ordering).


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ReaderT and concurrency

2005-11-16 Thread Thomas Jäger
Kurt,

There are basically two ways of doing that, namely monad transformers
and implicit parameters (we actually use both techniques in lambdabot).
Implicit parameters save you a lot of conversions or explicit passing of
variables because you only need one monad (the IO monad); however they
are ghc-specific, disliked by some (not by me, though!) and the order in
which they are type-checked is suboptimal, so be prepared for some scary
error messages. They also don't allow the implementation to be hidden
completely.

If you decide to use a monad transformer, the pattern you described
(using runReaderT) can be abstracted quite nicely:

-- the names are bad, I know...
class UnliftIO m where
  -- what we actually want is m (forall a. m a -> IO a), but that's
  -- impossible, so we are using cps instead.
  unliftIO :: ((forall a. m a -> IO a) -> IO b) -> m b

  -- unliftIO is not subsumed by getUnlifterIO, afaics.
  getUnlifterIO :: m (m a -> IO a)
  getUnlifterIO = unliftIO return


instance UnliftIO (ReaderT r IO) where
  unliftIO f = ReaderT $ \r -> f (`runReaderT` r)


Now printAndFork doesn't need to know anything about the internals of
the monad transformer anymore:

printAndFork :: String -> Integer -> MyReader ()
printAndFork _   0 = return ()
printAndFork str n = do
  unlift <- getUnlifterIO
  mv <- ask
  lift $ do
    modifyMVar_ mv $ \i -> do
      print $ str ++ show i
      return (i + 1)
    forkIO . unlift $ justPrint ("inner " ++ str)
  printAndFork str (n - 1)


It might also be worthwhile to wrap the monad transformer into a newtype

newtype MyIO a 
  = MyIO (ReaderT (MVar ...) IO a) 
  deriving (Functor, Monad, MyReader, UnliftIO)

where MyReader is a type class that provides only the 'get' method of
the Reader class, so that the user cannot mess with the MVar. Or you
could hide the fact that you are using MVars and provide only functions
that manipulate the state (cf. the 'MS'-functions in
lambdabot/LBState.hs).
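As a rough, self-contained sketch of the pattern (the class and instance are
as in the message; the counter, the 'done' MVar, and the bump action are
invented for this demo):

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Concurrent
import Control.Monad.Reader

class UnliftIO m where
  unliftIO :: ((forall a. m a -> IO a) -> IO b) -> m b

instance UnliftIO (ReaderT r IO) where
  unliftIO f = ReaderT $ \r -> f (`runReaderT` r)

-- Fork a thread that still sees the ReaderT environment; 'done' signals
-- completion so the demo is deterministic.
bump :: MVar () -> ReaderT (MVar Int) IO ()
bump done = unliftIO $ \unlift -> do
  _ <- forkIO . unlift $ do
         mv <- ask
         lift $ modifyMVar_ mv (return . (+ 1))
         lift $ putMVar done ()
  return ()

demo :: IO Int
demo = do
  mv   <- newMVar 0
  done <- newEmptyMVar
  runReaderT (bump done) mv
  takeMVar done   -- wait for the forked thread
  readMVar mv

main :: IO ()
main = demo >>= print  -- prints 1
```

The point is that the forked computation runs in the same ReaderT monad, so
`ask` works inside it without passing the MVar by hand.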


HTH,

Thomas

On Wed, 2005-11-16 at 11:51 -0500, Kurt Hutchinson wrote:
 I'm writing a program that will be using multiple threads to handle
 network activity on multiple ports in a responsive way. The threads
 will all need access to some shared data, so I'm using an MVar. So far
 so good. The problem is that passing the MVar around everywhere is
 kind of a pain, so I was hoping to use a ReaderT monad on top of the
 IO monad to handle that for me. I have that working, but one piece
 seemed a bit out of place so I wondered if there was a better way.
 Below is a small test program that presents my question.
 
 My program uses forkIO to create the separate threads (Set A), and
 some of *those* threads will need to create threads (Set B). In order
 for the ReaderT to handle the environment of the threads in Set B, do
 I have to perform another runReaderT when forking? Or is there a way
 to get the ReaderT environment automatically carried over to the newly
 created Set B thread? See the NOTE comment in the code below for the
 particular spot I'm asking about.




Re: [Haskell-cafe] Functional dependencies and type inference

2005-08-22 Thread Thomas Jäger
Simon,

I believe there may be some nasty interactions with generalized
newtype-deriving, since we can construct two Leibniz-equal types which
are mapped to different types using fundeps:

  class Foo a where
    foo :: forall f. f Int -> f a

  instance Foo Int where
    foo = id

  newtype Bar = Bar Int deriving Foo

  -- 'Equality' of Int and Bar
  eq :: forall f. f Int -> f Bar
  eq = foo

  class Dep a b | a -> b

  instance Dep Int Bool
  instance Dep Bar Char

  newtype Baz a = Baz { runBaz :: forall r. Dep a r => a -> r }

  conv :: (forall f. f a -> f b) ->
          (forall r. Dep a r => a -> r) -> (forall r. Dep b r => b -> r)
  conv f g = runBaz $ f (Baz g)

  bar = conv eq

Here, after type erasure, 'bar' is the identity function. Ghc infers

  bar :: (forall r. (Dep Int r) => Int -> r) -> Bar -> Char

This is not valid as an explicit type signature, but presumably the
proposal implies that we could type bar as

  bar :: (Int -> Bool) -> Bar -> Char

instead. Now

  \x -> bar (const x) (Bar 0) :: Bool -> Char

would become an unsafe coercion function from Bool to Char.


Thomas

On 8/11/05, Simon Peyton-Jones [EMAIL PROTECTED] wrote:
 Einar
 
 Good question.  This is a more subtle form of the same problem as I
 described in my last message.  In fact, it's what Martin Sulzmann calls
 the critical example.  Here is a boiled down version, much simpler to
 understand.
 
 module Proxy where
 
 class Dep a b | a -> b
 instance Dep Char Bool
 
 foo :: forall a. a -> (forall b. Dep a b => a -> b) -> Int
 foo x f = error "urk"
 
 bar :: (Char -> Bool) -> Int
 bar g = foo 'c' g
 
 
 You would think this should be legal, since bar is just an instantiation
 of foo, with a=Char and b=Bool.  But GHC rejects it.  Why?


Re: [Haskell-cafe] Control.Monad.Cont fun

2005-07-25 Thread Thomas Jäger
Hello Andrew,

On 7/25/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
 getCC :: Cont r (Cont r a)
 getCC = ccc (\(c :: Cont r a -> (forall b. Cont r b)) ->
 let x :: forall b. Cont r b = c x in x)

 gives

 [Error]
ghc-6.2 accepts this:
  getCC :: Cont r (Cont r a)
  getCC = ccc (\(c :: Cont r a -> (forall b. Cont r b)) ->
   let x :: forall b. Cont r b; x = c x in c x)

ghc-6.4/6.5 will also give the mysterious error above, but the
following works fine, thanks to scoped type variables:
  getCC :: forall a r. Cont r (Cont r a)
  getCC = ccc (\c ->
   let x :: forall b. Cont r b; x = c x in x)

 for which I have no riposte.  Is this a bug?
I would think so. I can't find anything in the documentation that
disallows such polymorphic type annotations.

Thomas


[Haskell-cafe] Re: Control.Monad.Cont fun

2005-07-23 Thread Thomas Jäger
Hi,

Sorry, I have to make a small correction to an earlier post of mine.

On 7/9/05, I wrote:
 In order to move the function (\jmp -> jmp `runC` jmp) into callCC,
 the following law, that all instances of MonadCont seem to satisfy, is
 very helpful.
 
 f =<< callCC g === callCC (\k -> f =<< g ((=<<) k . f))
This law is in fact only satisfied for some monads (QuickCheck
suggests Cont r and ContT r m). A counterexample for the reader
transformer:

  type M = ReaderT Bool (Cont Bool)

  f :: a -> M Bool
  f _ = ask

  g :: (Bool -> M a) -> M a
  g k = local not $ k True

  run :: M Bool -> Bool
  run (ReaderT m) = m True `runCont` id

Now,
  run (f =<< callCC g)  == True
  run (callCC (\k -> f =<< g ((=<<) k . f)))  == False
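For the record, the counterexample can be checked mechanically. A sketch
using mtl, with run written via runReaderT rather than pattern-matching the
newtype so it compiles with current library versions:

```haskell
import Control.Monad.Cont
import Control.Monad.Reader

type M = ReaderT Bool (Cont Bool)

f :: a -> M Bool
f _ = ask

g :: (Bool -> M a) -> M a
g k = local not (k True)

run :: M Bool -> Bool
run m = runReaderT m True `runCont` id

main :: IO ()
main = do
  -- k escapes local's scope, so f reads the outer env: True
  print (run (f =<< callCC g))
  -- here ask runs inside local not before k is invoked: False
  print (run (callCC (\k -> f =<< g ((=<<) k . f))))
```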

 In particular (regarding Functor as superclass of Monad), it follows
 
 f `fmap` callCC g === callCC (\k -> f `fmap` g (k . f))
This law (the one I actually used) is satisfied (again, only according
to QuickCheck) by every monad I checked.

Thomas


[Haskell-cafe] Re: Control.Monad.Cont fun

2005-07-09 Thread Thomas Jäger
Hello Tomasz,

This stuff is very interesting! At first sight, your definition of
getCC seems quite odd, but it can in fact be derived from its
implementation in an untyped language.

On 7/7/05, Tomasz Zielonka [EMAIL PROTECTED] wrote:
 Some time ago I wanted to return the escape continuation out of the
 callCC block, like this:
 
   getCC = callCC (\c -> return c)
We get the error message

test124.hs:8:29:
Occurs check: cannot construct the infinite type: t = t -> t1
  Expected type: t -> t1
  Inferred type: (t -> t1) -> m b

Haskell doesn't support infinite types, but we can get close enough by
creating a type C m b such that C m b and C m b -> m b become
isomorphic:

newtype C m b = C { runC :: C m b -> m b }

With the help of C we can implement another version of getCC and
rewrite the original example.

getCC1 :: (MonadCont m) => m (C m b)
getCC1 = callCC $ \k -> return (C k)

test1 :: ContT r IO ()
test1 = do
  jmp - getCC1
  liftIO $ putStrLn "hello1"
  jmp `runC` jmp -- throw the continuation itself, 
 -- so we can jump to the same point the next time.
  return ()

We can move the self-application of jmp into getCC to get the same
type signature as your solution, but we still rely on the auxiliary
datatype C.

getCC2 :: MonadCont m => m (m b)
getCC2 = do
  jmp <- callCC $ \k -> return (C k)
  return $ jmp `runC` jmp

In order to move the function (\jmp -> jmp `runC` jmp) into callCC,
the following law, that all instances of MonadCont seem to satisfy, is
very helpful.

f =<< callCC g === callCC (\k -> f =<< g ((=<<) k . f))

In particular (regarding Functor as superclass of Monad), it follows

f `fmap` callCC g === callCC (\k -> f `fmap` g (k . f))

Therefore, getCC2 is equivalent to

getCC3 :: MonadCont m => m (m b)
getCC3 = callCC $ \k -> return (selfApp $ C (k . selfApp)) where
  selfApp :: C m b -> m b
  selfApp jmp = jmp `runC` jmp

It is easy to get rid of C here arriving exactly at your definition of getCC.

   getCC :: MonadCont m => m (m a)
   getCC = callCC (\c -> let x = c x in return x)

   getCC' :: MonadCont m => a -> m (a, a -> m b)
   getCC' x0 = callCC (\c -> let f x = c (x, f) in return (x0, f))
For what it's worth, this can be derived in much the same way from the
(not well-typed)

getCC' x = callCC $ \k -> return (k, x)

using the auxiliary type

newtype C' m a b = C' { runC' :: (C' m a b, a) -> m b }
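A small usage sketch of getCC' as a loop target; the factorial example is
invented here, but it shows the escaped continuation being re-invoked with
new values:

```haskell
import Control.Monad.Cont

getCC' :: MonadCont m => a -> m (a, a -> m b)
getCC' x0 = callCC (\c -> let f x = c (x, f) in return (x0, f))

-- Invented demo: use the continuation as a loop, computing n!.
-- Calling 'loop' jumps back to the getCC' binding with fresh values.
fact :: Int -> Int
fact n = (`runCont` id) $ do
  ((i, acc), loop) <- getCC' (n, 1)
  if i > 1 then loop (i - 1, acc * i) else return acc

main :: IO ()
main = print (fact 5)  -- 120
```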

 
 Besides sharing my happiness, I want to ask some questions:
 
 - is it possible to define a MonadFix instance for Cont / ContT?
It must be possible to define something that looks like a MonadFix
instance; after all, you can define generally recursive functions in
languages like Scheme and SML which live in a ContT r IO monad, but
this has all kinds of nasty consequences, iirc.

Levent Erkök's thesis suggests (pp. 66) that there's no implementation
of mfix that satisfies the purity law.
http://www.cse.ogi.edu/PacSoft/projects/rmb/erkok-thesis.pdf

 - do you think it would be a good idea to add them to
   Control.Monad.Cont?
I think so, because they simplify the use of continuations in an
imperative setting and are probably helpful in understanding
continuations. Letting continuations escape is quite a common pattern
in Scheme code, and painful to do in Haskell without your cool trick.
I'd also like to have shift and reset functions there :)


Best wishes,

Thomas


[Haskell-cafe] Re: [Haskell] best way to do generic programming?

2005-07-01 Thread Thomas Jäger
Arka,

as you already mentioned, you want to have a look at the Scrap your
Boilerplate approach.

 import Data.Generics
 ...
 data Expr = Const Int | Var String | Add Expr Expr deriving (Typeable, Data)

will derive the necessary Data instance and allow you to define

 optimizeDeep :: Data a => a -> a
 optimizeDeep = everywhere (mkT optimize)
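Putting it together with the Stm type from the question gives a complete
sketch (it needs the syb package; the concrete test statement is invented,
since the question elides its arguments):

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics

data Expr = Const Int | Var String | Add Expr Expr
  deriving (Show, Typeable, Data)

data Stm = Assign String Expr | Seq Stm Stm
  deriving (Show, Typeable, Data)

-- the one interesting case; everything else is left untouched
optimize :: Expr -> Expr
optimize (Add x (Const 0)) = x
optimize e                 = e

-- apply 'optimize' bottom-up everywhere, across both Expr and Stm
optimizeDeep :: Data a => a -> a
optimizeDeep = everywhere (mkT optimize)

main :: IO ()
main = print (optimizeDeep (Assign "x" (Add (Var "x") (Const 0))))
-- Assign "x" (Var "x")
```

mkT lifts the Expr-specific function to a generic transformation, and
everywhere applies it to every node of any Data value, so Stm needs no code
of its own.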

On 7/1/05, Arka al Eel [EMAIL PROTECTED] wrote:
 Hi,
 
 I'm playing with generic programming. At the moment I'm interested in
 reusable transformations on data types. Provided for example a toy
 datatype Expr like this:
 
 data Expr = Const Int | Var String | Add Expr Expr
 
 Plus a function optimize that optimizes a pattern x + 0 into x:
 
 optimize (Add x (Const 0)) = x
 
 You would now want this to be generic, so the function should be
 recursive for all other constructors *and* other data types. For
 example, suppose that Expr is included in other datatype:
 
 data Stm = Assign String Expr | Seq Stm Stm
 
 I now want the optimize transformation to work on Stm, like
 this:
 
 x = optimize (Seq (Assign (Add (Var x) (Const 0))) blah blah)


 with a sensible solution in an hour, while I can do this in two
 minutes in Scheme. After all, writing compilers is supposed to be a
 *strong* point of Haskell. Real world is knocking on your door, guys!


Thomas


Re: [Haskell-cafe] Why I Love Haskell In One Simple Example

2005-06-28 Thread Thomas Jäger
Hi Mads,

Since ghc-6.4 there's another feature that is enabled by such explicit
foralls in type signatures, namely scoped type variables. Consider
 foo :: Num a => a -> a -> a
 foo x y = x + z where
   z = 2 * y
Now since adding type signatures is a good thing, you want to give z
an explicit type signature. But |z :: a| fails with an "Inferred type
is less polymorphic than expected" error because the |a| actually
means |forall a. a| and not the |a| from foo's type signature. The
classical way (still, it's an extension implemented in hugs and all
versions of ghc I'm aware of) to bring the variable into scope is by
binding it like this:
 foo (x :: a) y = x + y where
This is fine in such simple examples, but often gets tedious and
clutters up the definition with weird type annotations. Therefore
ghc-6.4 implements the great feature that the |a| from foo's type
signature is automatically brought into scope if you explicitly
quantify it with a forall.

Aside: A small disadvantage seems to be that you can only scope over
either all or none of the type variables in your signature. However,
 foo :: forall a. Num a => (forall b. ...)
will bring the variable a into scope, but not b, and is otherwise equivalent to
 foo :: forall a b. Num a => ...
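The behaviour described above can be seen in a minimal sketch (current GHC
needs the ScopedTypeVariables extension enabled explicitly):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- The explicit forall brings 'a' into scope in the body, so the local
-- signature on z refers to foo's 'a' rather than a fresh type variable.
foo :: forall a. Num a => a -> a -> a
foo x y = x + z
  where
    z :: a
    z = 2 * y

main :: IO ()
main = print (foo (3 :: Int) 4)  -- 3 + 2*4 = 11
```

Without the forall (or the extension), the `z :: a` signature is rejected
with the "less polymorphic than expected" error quoted above.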

On 6/27/05, Mads Lindstrøm [EMAIL PROTECTED] wrote:
snip
 I had never seen anybody use forall a. in function signatures before,
 and therefore was curious about its effect. This is probably due to my
 inexperience regarding Haskell. However, I tried to remove it and wrote
 this instead:

 test :: (Num a) => a

 and the code still compiled and seems to run fine. Also using the
 prettyShow and rpnShow functions. So, why are you using the forall
 keyword? (this is not meant as a critique, i am just curious)

 I tried to find documentation about the use of the forall keyword in
 respect to functions (I do know about it with respect to
 existentially quantified types), but with no luck. So, if anybody has
 some good pointers, please let me know about it.
A great resource is
http://haskell.org/ghc/docs/latest/html/users_guide/type-extensions.html
Bookmark this page and come back to it every once in a while - there
are just so many treasures in it - one of my favorites is 7.4.12,
Generalised derived instances for newtypes.

Thomas


Re: [Haskell-cafe] implicit parameters THANK YOU!

2005-03-22 Thread Thomas Jäger
On Mon, 21 Mar 2005 20:29:35 -0500 (Eastern Standard Time), S.
Alexander Jacobson [EMAIL PROTECTED] wrote:
 I just discovered implicit parameters.  To everyone involved with
 making them, THANK YOU.  They are blazingly useful/powerful for server
 handler style libraries where you want to make a variety of local
 environment information available to the handlers without burdening
 the handlers with a big dictionary object to carry around.  FANTASTIC.
I like to think of implicit parameters as a direct-style reader monad.
Therefore, they can be used interchangeably with reader monads (or with
explicit passing of the parameter, for that matter). Which one you
choose is of course a matter of taste, but personally, I prefer the
monadic approach, since it is easier to extend (maybe you later
discover that you really needed state) and it is the usual (and
portable) Haskell solution. Furthermore, because `(->) r' already is a
reader monad, the code can often be kept very concise.
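For instance, the function monad alone often suffices. A small invented
example (in ghc-6.x the Monad ((->) r) instance came from
Control.Monad.Instances or Control.Monad.Reader; in current GHC it is in
the Prelude):

```haskell
-- In the (->) r monad, 'ask' is just 'id': the shared environment is the
-- function argument, threaded implicitly by (>>=).
mean :: [Double] -> Double
mean = sum >>= \s -> length >>= \n -> return (s / fromIntegral n)

main :: IO ()
main = print (mean [1, 2, 3, 6])  -- 3.0
```

Both sum and length read the same "environment" (the list) without it being
named anywhere.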

The situation is different if you've already written some code and
realize you need an additional parameter in a couple of functions.
Monadifying all that code (or explicitly threading the parameter) is
usually a lot of trouble and the change might be only experimental
anyway, so with implicit parameters, you can just change a few
signatures and be done.

 That being said, they are so powerful they are probably easy to abuse.
 Could those experienced with this feature provide warnings about
 possible problems with overuse?
I've only used them sparingly and I think that's the way to go. Also,
you should be aware of a few common problems:

1. Recursive functions
http://www.haskell.org/pipermail/haskell-cafe/2005-January/008571.html
This is not surprising if you consider how type inference for
recursive functions works, but it is obviously the wrong thing to do.
Personally, I'd be happy if (mutually) recursive functions using
implicit parameters without a type signature were rejected, because to
do it correctly, some sort of polymorphic recursion is necessary.

2. The monomorphism restriction
Consider
 a :: Int
 a = let ?foo = 0 in b where
   b :: (?foo :: Int) => Int
   b = let ?foo = 1 in c where
     c = ?foo
The meaning of this code depends on the flag
-f(no)-monomorphism-restriction since with the monomorphism turned on,
`c' gets the monomorphic type `Int', and the `?foo' in the definition
of `c' refers to the implicit parameter of `b', so `a' evaluates to
`0'. On the other hand, without the monomorphism restriction, the type
of `c' becomes `(?foo :: Int) => Int', and it is easy to see that `a'
evaluates to `0'.
The fact that the meaning depends on the type signature actually isn't
that bad; after all, in explicit monadic code, you would have to make
the same choice. The interaction with the monomorphism restriction,
however, seems very unfortunate.

Btw, to explicitly type a declaration in a let binding, the style
let x :: a = b isn't enough; it needs to be let x :: a; x = b.

Thomas


Re: [Haskell-cafe] implicit parameters THANK YOU!

2005-03-22 Thread Thomas Jäger
Hello again,

Sorry, I made a little mistake.

  a :: Int
  a = let ?foo = 0 in b where
    b :: (?foo :: Int) => Int
    b = let ?foo = 1 in c where
      c = ?foo
 The meaning of this code depends on the flag
 -f(no)-monomorphism-restriction since with the monomorphism turned on,
 `c' gets the monomorphic type `Int', and the `?foo' in the definition
 of `c' refers to the implicit parameter of `b', so `a' evaluates to
 `0'. On the other hand, without the monomorphism restriction, the type
 of `c' becomes `(?foo :: Int) => Int', and it is easy to see that `a'
 evaluates to `0'.
In this case, `a' of course evaluates to `1'.

Thomas


Re: [Haskell-cafe] Point-free style

2005-02-14 Thread Thomas Jäger
On Mon, 14 Feb 2005 11:07:48 +0100, Daniel Fischer wrote:
 And could one define
 \f g h x y -> f (g x) (h y)
 
 point-free?
sure,
((flip . ((.) .)) .) . (.)
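For the record, a quick check that the point-free term has the intended
meaning (the name and test numbers are invented):

```haskell
-- \f g h x y -> f (g x) (h y), written point-free
combine :: (b -> d -> e) -> (a -> b) -> (c -> d) -> a -> c -> e
combine = ((flip . ((.) .)) .) . (.)

main :: IO ()
main = print (combine (+) (* 2) (+ 3) 4 5)  -- (4*2) + (5+3) = 16
```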

Thomas


Re: [Haskell-cafe] Point-free style

2005-02-14 Thread Thomas Jäger
Hi,

On Mon, 14 Feb 2005 14:40:56 +0100, Daniel Fischer wrote:
   \f g h x y -> f (g x) (h y)
  ((flip . ((.) .)) .) . (.)
 
 Cool!
 
 But I must say, I find the pointed version easier to read (and define).
It certainly is. In fact, I transformed it automatically using a toy
lambdabot plugin I've recently been writing.

 So back to the question before this one, is there a definite advantage of
 point-free style?
 
 I tend to use semi-point-free style, but I might be argued away from that.
Yes, me too. I think obscure point-free style should only be used if a
type signature makes it obvious what is going on. Occasionally, the
obscure style is useful, though, if it is clear there is exactly one
function with a specific type, but tiresome to work out the details
using lambda expressions. For example to define a map function for the
continuation monad
 cmap :: (a -> b) -> Cont r a -> Cont r b
One knows that it must look like
 cmap f = Cont . foo . runCont
where foo is some twisted composition with f, so successively trying
the usual suspects ((f.).), ((.f).), ... will finally lead to the only
type-checking and thus correct version (.(.f)), even though I can't
tell what exactly that does without looking at the type or
eta-expanding it.
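The guessing game can be confirmed quickly. A sketch with current mtl, which
spells the wrapper 'cont'/'runCont' rather than exposing the Cont
constructor used above:

```haskell
import Control.Monad.Cont

-- (. (. f)) is the twisted composition: \m k -> m (k . f)
cmap :: (a -> b) -> Cont r a -> Cont r b
cmap f m = cont ((. (. f)) (runCont m))

main :: IO ()
main = print (cmap (+ 1) (return 41) `runCont` id)  -- 42
```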

Thomas


Re: [Haskell-cafe] Point-free style

2005-02-14 Thread Thomas Jäger
On Mon, 14 Feb 2005 16:46:17 +0100, Lennart Augustsson
[EMAIL PROTECTED] wrote:
 Remi Turk wrote:
  import Control.Monad.Reader
 
  k :: a -> b -> a
  k = return
 
  s :: (a -> r -> b) -> (a -> r) -> a -> b
  s = flip (>>=) . flip
This can even be written as s = ap.

 It can be done without importing anything.
 (Except the implicit Prelude import, of course.)
It can, but is it possible to do it much easier than
 s' = flip flip (span ((0 ==) . fst) . zip [0..] . repeat) . ((.) .) . (id .)
    . (uncurry .)
    . flip ((.) . flip (.) . (. (snd . head))) . (. (snd . head))
?

Thomas


Re: [Haskell-cafe] Things to avoid (Was: Top 20 ``things'' to know in Haskell)

2005-02-11 Thread Thomas Jäger
Hi,

On Thu, 10 Feb 2005 16:18:19 -0800, Iavor Diatchki
[EMAIL PROTECTED] wrote:
  because I don't like the current situation with (n+k)-patterns:
  Everybody says they're evil, but hardly anybody can explain why he
  thinks so.
 
 I think 'evil' may be a little too strong.  I think the usual argument
 against 'n+k' patterns is that:
 i) they are a very special case, and may be confusing as they make it
 look as if '+' was a constructor, which it is not
agreed

 ii) they lead to some weird syntactic complications, e.g.
 x + 3 = 5 defines a function called '+', while (x + 3) = 5 defines a
 variable 'x' to be equal to 2.
 and there is other weirdness like:
 x + 2 : xs = ...
 does this define '+' or ('x' and 'xs')?  i think it is '+'.  
IMO, that's not a big problem, because if ambiguities arise, only one
of the possible meanings will compile (e.g. if you use + somewhere
else in the module, ghc will complain about an ambiguous occurrence of
`+'). All (rather strange) other cases are caught by ghc -Wall.

I found another disadvantage:
iii) As a side effect of how n+k patterns work, each instance of the
Num class must also be an instance of Eq, which of course doesn't make
sense for all numeric types.

 anyways
 when used as intended 'n+k' are cute.   it is not clear if the
 complications in the language specification and implementations are
 worth the trouble though.
It's true that their functionality can be easily expressed without them.

I like to see them (well, n+1 patterns) as a special case of views
because they allow numbers to be matched against something that is not
a constructor and involve a computation on pattern matching. An
unambiguous replacement using views could look somewhat like
 foo Zero  = 1
 foo (Succ n) = 2 * foo n

Thomas


Re: [Haskell-cafe] Things to avoid (Was: Top 20 ``things'' to know in Haskell)

2005-02-10 Thread Thomas Jäger
  Is there also a Wiki page about things you should avoid?
 
 Since I couldn't find one, I started one on my own:
 
 http://www.haskell.org/hawiki/ThingsToAvoid
 
 I consider 'length', guards and proper recursion anchors.

[Moving the discussion from the wiki to the mailing list until we've
reached some kind of consensus]

ad n+k-patterns
This old discussion seems kind of relevant.
http://www.dcs.gla.ac.uk/mail-www/haskell/msg01131.html

In my opinion, there's no reason to avoid (n+1)-patterns. Recursion is
the natural definition of many functions on the natural numbers, just
like on lists, trees or any other ADT. There just happens to be a more
efficient representation of naturals than as Peano numbers. There are
indeed circumstances where
 foo (n+1) = ... n ... n ... n ...
is much clearer than
 foo n = let n' = n - 1 in n' `seq` ... n' ... n' ... n' ...
On the wiki, you claim
 data Natural = Zero | Successor Natural
 They are implemented using binary numbers and no attempt is made to
 simulate the behaviour of Natural (e.g. laziness). Thus I wouldn't state
 that 3 matches the pattern 2+1.
If however, you had defined
 data Nat = Zero | Succ !Nat,
pattern matching would still be possible, but Nat would behave exactly
as the (n+1)-pattern.

ad guards
I agree that guards are not always the right way to do it (as in the
example you mentioned on the wiki which was bad Haskell code anyway).
However, they have valid uses that can't be easily/naturally expressed
without them. A typical example might be
 foo (App e1 e2) | e1 `myGuard` e2 = App (foo e1) (foo e2)
 foo (Lambda v e) = Lambda v (foo e)
 foo (App e1 e2) = App (bar e1) (bar e2)
 ... 

So instead of saying guards are bad, I think there should rather be
an explanation of when guards are appropriate.

Altogether, the spirit of the page seems to be use as little
syntactic sugar as possible, which may be appropriate if it is aimed at
newbies, who often overuse syntactic sugar (do-notation). However, I
like most of the syntactic sugar provided by Haskell/Ghc, and it is
one reason why Haskell is such a nice language, so I don't think we
should advocate unsugaring all our programs.

Thomas


Re: [Haskell-cafe] Things to avoid (Was: Top 20 ``things'' to know in Haskell)

2005-02-10 Thread Thomas Jäger
On Thu, 10 Feb 2005 12:50:16 +0100 (MET), Henning Thielemann
[EMAIL PROTECTED] wrote:
 
 On Thu, 10 Feb 2005, [ISO-8859-1] Thomas Jäger wrote:
 
  Altogether, the spirit of the page seems to be use as little
  syntactic sugar as possible, which may be appropriate if it is aimed at
  newbies, who often overuse syntactic sugar (do-notation).
 
 This overuse is what I observed and what I like to reduce. There are many
 people advocating Haskell just because of the sugar, which lets interested
 people fail to see what's essential for Haskell. When someone says to me
 that there is a new language which I should know of because it supports
 definition of infix operators and list comprehension, I shake my head and
 wonder why he doesn't simply stick to Perl, Python, C++ or whatever.
I don't believe that Haskell advocacy usually happens on such a
superficial level, in fact most users of curly-braced languages hate
Haskell's syntax, so that won't be an argument for Haskell anyway.
Looking at it closer, syntax often makes a huge difference. Haskell is
in many ways similar to mathematical notation, which allows one to express
complicated concepts in a concise way and happens to use a lot of
syntactic sugar. There should be no doubt that 1 + 2 + 3 is
easier for humans to parse than (+ (+ 1 2) 3).
This becomes especially important when you are embedding a domain
specific language into Haskell. Allowing combinators to be used infix
makes code more readable and understandable, reduces parentheses,
and sometimes resolves the question in which order the arguments of
the functions appear. It's not strictly necessary, but is a big
advantage over postfix languages.

 What I forgot: Each new syntactic sugar is something more that a reader must
 know, a compiler and debugger developer must implement and test, a source
 code formatter, highlighter, documentation extractor or code transformer
 must respect. We should try harder to reduce these extensions rather than
 inventing new ones.  Leave the award for the most complicated syntax to
 C++! :-]
Ideally, new syntactic sugar is self-explanatory, and this is the case
for most of Haskell's sugar (maybe in contrast to C++). The fact that
some tools get a little more complicated doesn't bother me much if it
helps me write my program in a more concise way.

 That's why I want to stress that the syntactic sugar is much less
 important or even necessary than generally believed. I hope that the
 examples clarify that.
Yeah, as long as it is explained and clearly marked as an opinion (as
it is now), that's ok. One reason that I got so excited about that is
because I don't like the current situation with (n+k)-patterns:
Everybody says they're evil, but hardly anybody can explain why he
thinks so.

Thomas