Prelude Hacking

1993-09-20 Thread Andy Gill


Talking about the Haskell Prelude ...

IMHO as a Haskell user, one big problem with Haskell is the Prelude.
It's too specific in places, making it too big: take, for example, the
lex function or the Complex numbers.  However, in other places it lacks
completeness, and this is addressed by a (different) Pandora's box of
add-ons supplied with each of the Haskell implementations. The whole
area feels messy.

What about taking a leaf from the SML community? They now have
(effectively) a two-level prelude:

  1. The Base
  2. The Edinburgh ML Library

(I know it took them some time to get this far, but we can get it right :-)

Drawing from this pattern we *could* have

  1. A very small kernel, aka PreludeBuiltin, PreludeCore.

  2. A well-defined *larger* set of other modules the user can import,
     like: Array, Text, IO, etc.

With this we have a larger (and standard) second-level base.

Example: Maybe!

Take the Maybe datatype. I use it in all my programs, so I need to
define it. But I want my program to run in standard Haskell, so it's
time to use #define :-( I could define it in the script like any
other data structure, but this defeats the point of our compilers
providing such things as add-ons in the first place.
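
For concreteness, here is a minimal sketch of what such a shared
second-level module could look like (the module name and the maybe
helper are illustrative only, not taken from any existing standard):

module Maybe(Maybe(..), maybe) where

-- The datatype everybody keeps redefining locally.
data Maybe a = Nothing | Just a deriving (Eq, Text)

-- A common eliminator: a default for Nothing, a function for Just.
maybe :: b -> (a -> b) -> Maybe a -> b
maybe d f Nothing  = d
maybe d f (Just x) = f x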

We would all hesitate to add more and more to our prelude, but
adding a new (standard) module that we could all import would not
raise many eyebrows. A two-level prelude would be one way of doing this.
It also provides a forum for people to add new libraries to Haskell
(regular expressions, for example).

The difficult thing for the committee to decide is how to split up the
prelude ++ extras into a modular and extensible bundle. Coming from
the 'C' programming community, adding the line

 import IO

to my modules that actually do IO does not cause a problem :-)
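
To make that concrete, a sketch (purely hypothetical: an IO module of
this kind does not exist today, and the names are mine) of a module
that imports only the second-level pieces it uses, written in the
continuation-style IO we have now:

module Echo(echo) where

import IO   -- the imagined second-level IO module

-- Copy one channel to another, Haskell 1.2 continuation style.
echo :: String -> String -> Dialogue
echo from to = readChan from exit (\contents ->
               appendChan to contents exit done)

A Main module would then just say something like
main = echo stdin stdout.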

Just some observations...

Andy Gill, Glasgow.




Re: the hbc parsing library

1993-09-20 Thread Lennart Augustsson



 This is a rather belated summary of the replies to my earlier query
 about the library of parsing combinators which comes with the hbc
 compiler.

OK, I'll try again.  My last reply bounced.  The parsing library you
have got was just an experiment.  It should never have been released.
A much improved one will be out any day now.  Just ignore what you have
got and mail me if you want the new one.

-- Lennart




Re: the hbc parsing library

1993-09-20 Thread balan



This is a rather belated summary of the replies to my earlier query
about the library of parsing combinators which comes with the hbc
compiler. It is based largely on advice from Alastair Reid, Stephen J
Bevan and Ken Sailor; thanks to all of them for helping me out.
(Incidentally, if anyone who wants to use the library does not
understand parsing combinators, I recommend Graham Hutton's article,
"Parsing using combinators", which appears in the proceedings of the
1989 Glasgow functional programming workshop. This has been published
by Springer, in their Workshops in Computing series; the
editors are Kei Davis and John Hughes.)

My question was about the class $Token$, whose signature is:

class (Text a) => Token a where {
    compareT  :: a -> a -> OrderedT;
    stringT   :: a -> String;
    positionT :: a -> String;
    eqT       :: a -> a -> Bool
}

I wasn't sure of the intended meaning of this class or of how it is
used in the definitions of the library functions. Now that I've seen
a few replies to my query and done a small experiment I've come to the
conclusion that the library uses $eqT$ to test for equality but
doesn't use any of the other methods. This means that it is possible
to define tokens which store any amount of extra information (such
as, for example, their position --- this would be of use in testing
for offside-ness) because we can control the way in which the parsing
combinators look at two tokens to decide whether or not they are
lexically equivalent.

Here's a short(ish) example of an instantiation of the $Token$ class. It
defines a type $LexToken$; it is intended that a string of LexTokens
will be produced by a lexer and then fed into a parser. Of course, in
the parser we are only interested in the class of a token, rather than
its actual value --- therefore, the equality test "hides" the extra
information.



module LexToken(LexToken(..), lexer, ParseLib..) where

import ParseLib

data LexToken = Vname   String
              | Integer String
              | Nop0    String
              | Nop1    String
              | Lop1    String
              | Lop2    String
              | Rop1    String
              | Rop2    String
              | LPar    String
              | RPar    String
              deriving (Eq, Text)





instance Token LexToken where

   compareT x y = UnT

   eqT (Vname _)   (Vname _)   = True
   eqT (Integer _) (Integer _) = True
   eqT (Nop0 _)    (Nop0 _)    = True
   eqT (Nop1 _)    (Nop1 _)    = True
   eqT (Lop1 _)    (Lop1 _)    = True
   eqT (Lop2 _)    (Lop2 _)    = True
   eqT (Rop1 _)    (Rop1 _)    = True
   eqT (Rop2 _)    (Rop2 _)    = True
   eqT (LPar _)    (LPar _)    = True
   eqT (RPar _)    (RPar _)    = True
   eqT _ _ = False -- watch out for this catch-all: any unmatched pair is unequal

   stringT (Vname x)   = x
   stringT (Integer x) = x
   stringT (Nop0 x)    = x
   stringT (Nop1 x)    = x
   stringT (Lop1 x)    = x
   stringT (Lop2 x)    = x
   stringT (Rop1 x)    = x
   stringT (Rop2 x)    = x
   stringT (LPar x)    = x
   stringT (RPar x)    = x

   positionT x = ""


$positionT$ may be used to return the "extra information". The general
opinion amongst my advisors is that it returns a string because that
is the most flexible return type (remember that since we can derive a
Text instance for any algebraic type it is easy to convert values to
and from strings). In the declaration above, $positionT$ just returns
the empty string; the selector $stringT$ seems more appropriate for
conveying the only extra information which we have.
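
For what it's worth, here is a sketch of a token type that really does
carry positions, hiding them from the equality test but exposing them
through $positionT$. (This assumes the same $Token$ class and the
$UnT$ value from ParseLib; the $PosToken$ type itself is my own
invention, not part of the library.)

import ParseLib

-- Hypothetical token carrying its lexeme and a (line, column) position.
data PosToken = PosTok String (Int, Int) deriving (Eq, Text)

instance Token PosToken where
   compareT x y = UnT

   -- Equality looks only at the lexeme, so the parser never sees positions.
   eqT (PosTok s _) (PosTok t _) = s == t

   stringT (PosTok s _) = s

   -- The extra information, rendered as a String via the derived Text instance.
   positionT (PosTok _ p) = show p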


That's it . . . sorry for the delay, and thanks again to all the
people who helped me out.


balan




Why does one work and not the other?

1993-09-20 Thread Van Snyder


In the following little program, why do the definition of p and the
second definition of q work, while the first definition of q does not?
Do I not understand Haskell, or is this a feature of ghc?

Van Snyder <[EMAIL PROTECTED]>


module Main (main) where

main = appendChan stdout ((show p)++"\n") exit d
d    = appendChan stdout ((show q)++"\n") exit done

p :: (Integral d) => Array d (Array d d)
p = array (1,2) [1 := array (1,3) [j := j | j <- [1..3]],
                 2 := p!1 ]
{-
q :: (Integral d) => Array d (Array d d)
q = array (1,2) [1 := array (1,3) [j := j | j <- [1..3]]] ++
                [2 := q!1]
-}
q :: (Integral d) => Array d (Array d d)
q = array (1,2) [i := if i == 1 then array (1,3) [j := j | j <- [1..3]]
                      else q!(i-1) | i <- [1..2]]




Defining Haskell 1.3 - Committee volunteers wanted

1993-09-20 Thread Brian Boutel



Joe Fasel, John Peterson and I met recently to discuss the next step in
the evolution of Haskell.

While there are some big issues up ahead (adding Gofer-like constructor
classes, for example), these should be considered for the next major
revision, Haskell 2.0.

For now, we want to be less ambitious, and produce a definition of
Haskell 1.3.

Topics on the agenda include:

Monadic IO
Strict data constructors
Records (naming field components)
Prelude hacking
Standardizing annotation syntax

We think the best way to proceed is to call for volunteers to form 
a new committee to do the work on this.

So, who's interested?

--brian





Re: Why does one work and not the other?

1993-09-20 Thread jones-mark


Van Snyder ([EMAIL PROTECTED]) asks why the following definition
doesn't work:

|  q :: (Integral d) => Array d (Array d d)
|  q = array (1,2) [1 := array (1,3) [j := j | j <- [1..3]]] ++
|      [2 := q!1]

The reason is that it is missing some parentheses.  Using the rule
that function application always binds tighter than any infix operator,
the right hand side of the definition is parsed as:

 (array (1,2) [1 := array (1,3) [j := j | j <- [1..3]]]) ++ [2 := q!1]

But then the first argument to ++ is an array, not a list, so this is a
type error.  Writing parens around the two lists should solve the
problem:

  q = array (1,2) ([1 := array (1,3) [j := j | j <- [1..3]]] ++
                   [2 := q!1])

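The same precedence rule in a tiny, self-contained form (the names
below are made up purely for illustration):

-- Function application binds tighter than any infix operator, so
--   take 2 xs ++ drop 1 ys   parses as   (take 2 xs) ++ (drop 1 ys),
-- never as   take 2 (xs ++ drop 1 ys).
example :: [Int]
example = take 2 [1,2,3] ++ drop 1 [4,5,6]   -- yields [1,2,5,6]
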
Hope that helps!
Mark