Re: Lifted functions

1993-11-04 Thread wadler


  * My taste is that isomorphic possibly-abstract types should get a new kind
   of type declaration (newtype, isotype, or whatever), perhaps defined to be
   exactly equivalent to an algebraic data type with just one strict
   constructor.  But I don't feel strongly about this.
   
   (At least we appear to be agreed on the idea that there should
   be an explicit constructor.)

Our tastes agree here.
   
   * I'd misunderstood Phil's position about liftedness.  He advocates
     - unlifted functions
     - lifted algebraic data types (all of them)
     - unlifted tuples (just tuples, not all single-constr types)

Not quite.  I advocate separate declarations for lifted constructors
and unlifted constructors.  For lifted constructors, we have the
traditional `data' declaration, exactly as before.  For unlifted
constructors we have

newtype  T a_1 ... a_k  =  C t_1 ... t_n

where the a's are type variables, the t's are types, T is the
new type name, and C is the name of the single constructor.  (It's
semantically impossible to have an unlifted sum type, which is why
there's only one constructor.)

Note that `newtype' yields Simon's isomorphic type declaration as a
special case when n=1.
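
For the n=1 case the difference shows up directly in pattern matching; here
is a small sketch, with hypothetical types D and N, of what ``unlifted''
buys (the behaviour of the `case' is the point):

    data    D = MkD Int    -- lifted constructor:   MkD _|_  /=  _|_
    newtype N = MkN Int    -- unlifted constructor: MkN _|_   =  _|_

    dTest, nTest :: Int
    dTest = case (undefined :: D) of MkD _ -> 0   -- _|_: the match forces the D
    nTest = case (undefined :: N) of MkN _ -> 0   -- 0  : matching MkN is free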

I furthermore advocate that tuples be unlifted, i.e., defined by
`newtype' rather than `data'.  (I'm not especially enamored of the name
`newtype'; any better suggestions?)
   
   I have never, never been tripped up by the liftedness of tuples, but the
   argument that ``we are prepared to pay for laziness so why not this too''
   has a certain masochistic charm.  I'll try the effect on performance of
   making all tuple-matching lazy in the nofib suite.

Thanks Simon!
   
   * Phil remarks that pattern-matching in let/where is implicitly twiddled. 
   That's true, and it is inconsistent; but while we might fix it for tuples,
   we can't fix it for general algebraic data types --- so is it important to
   fix it for tuples...?
   
Fair point.  The only way to make the correspondence perfect is to ban
lifted constructors in `let' and `where', allowing only unlifted constructors.
This yields a neater language, but is probably too radical a change for
Haskell 1.3.
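
For reference, the implicit twiddle at issue, with hypothetical f and g: a
pattern binding in `let'/`where' matches as if it were ~(a,b), while a
`case' match does not:

    f, g :: (Int, Int) -> Int
    f p = let (a, b) = p in 0      -- f undefined = 0  : the pair is never forced
    g p = case p of (a, b) -> 0    -- g undefined = _|_: the match forces the pair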

Cheers,  -- P




Writing Haskell RPC servers

1993-11-04 Thread Andre Blavier


I have written a Haskell library that enables you to write network
servers in Haskell, using the RPC (Remote Procedure Call) protocol.
Currently, the library works under HBC only (it uses the Native class).

Servers written with the RPCServer library are run through the inetd(8C)
daemon.  All it takes to write a Haskell server is to implement functions
that get their arguments and return their value through an XDR (eXternal
Data Representation) layer.

The primary motivation for writing this library was to make Haskell
programs able to communicate with C programs and exchange complex data
structures with them. As a side effect, the library brings the power of
client-server capabilities to the Haskell world.

Only some basic acquaintance with RPC and XDR is required for using the
library.

The RPCServer library is currently used at our organization in the development
of an application involving code written in C++, Smalltalk and Haskell.
Programs written in these languages communicate through RPC.

I've put the library on ftp.cs.chalmers.se (/pub/haskell/incoming/RPC.tar.Z).

Comments and suggestions are welcome.

__________________________________________________________
Andre BLAVIER
Departement RD
Informatique CDC           E-Mail: [EMAIL PROTECTED]
113 rue Jean Marin Naudin  Tel: (+33) (1) 46 63 92 97
92220 Bagneux - France     Fax: (+33) (1) 46 63 98 85
__________________________________________________________




Re: Laws

1993-11-04 Thread jones-mark


The bottom line:  Equational reasoning in a typed language needs to
  take account of types.

For those who would like me to go into a little more detail:

Warren has described a puzzling situation in which the two similarly
defined functions:

  f1 x y = const (show x == show y) w
    where
      w = x ++ y ++ ""

  f2 x y = const (show x == show y) w
    where
      w = (x ++ [5], y ++ "")

give different results,  f1 [] [] = True,  f2 [] [] = False.  What is
really puzzling is that it doesn't look as though the results of each
function should depend on the definition of w, which is the only thing
that varies between the two definitions ... ?

The trick is that, although the value of w is not actually used in the
result of either function, the two different definitions do have a
significant effect on the types of the two functions:

  f1 :: [Char] -> [Char] -> Bool

  f2 :: [Int] -> [Char] -> Bool

Now it should be obvious that the two functions are not equivalent; they
don't even have the same type.  The reason that they actually give
different results is because, as a special case, the empty string is
`show'n as "", while the empty list of integers (as any other empty list)
is `show'n as [].  A similar effect could have been achieved by using
explicit type declarations:

  f1 :: [Char] -> [Char] -> Bool
  f1 x y = show x == show y

  f2 :: [Int] -> [Char] -> Bool
  f2 x y = show x == show y
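
For the record, that special case is easy to check directly (checkF1 and
checkF2 are just names I've made up for the two comparisons):

  checkF1, checkF2 :: Bool
  checkF1 = show ("" :: String) == show ("" :: String)  -- True : "\"\"" == "\"\""
  checkF2 = show ([] :: [Int])  == show ("" :: String)  -- False: "[]"  /= "\"\""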

The moral of the story seems to be that, if you are doing equational
reasoning in a typed language, then you really need to take account of
the types.  This is particularly true when overloading is involved, but
it is still necessary even if we're restricted to pure polymorphism.
For example, I doubt that very many people would be happy with a proof
that used the law:

  tail [1] = [] = tail "1".

In other words, when we write  e1 = e2,  we really want to be sure that
e1 and e2 have the same type.  Likewise, when we write a law like:

   map id = id,

we really ought to acknowledge that the two identity functions used here
are not the same; if the one on the left has type a -> a, then the one on
the right has type [a] -> [a].

Come to think of it, this argument makes me feel somewhat happier about
the fact that a Haskell type system will reject an expression like []==[].
After all, if we write  [] = []  without saying what the type of each side
is supposed to be, then we have no way of knowing whether the equation even
makes sense.  (Of course, Haskell's reasons for complaining about []==[]
are a little different, and I wasn't really that unhappy about it in the
first place ... never mind!)

All the best,
Mark




Re: Laws

1993-11-04 Thread Joe Fasel


Just a small addendum to Mark's response to Warren:

Overloading (even just polymorphism, as Mark says) does compromise
equational reasoning, in much the same way that lexical scoping does.
That is,  x = y  |-  f x = f y,  provided it's the same x and the same y.

Cheers,
--Joe




Lifted functions

1993-11-04 Thread Simon L Peyton Jones



I think we should be pretty cautious about jumping in with lifted function
spaces.  I have come up with two distinct unintended effects.  While neither
is fatal to the idea, I don't think either is obvious, and I am nervous that
others may pop out of the woodwork somewhere down the road (to mix
metaphors).

So far I am unconvinced about lifted function spaces.  I'd still buy the
Data class idea.  Remember, you don't have to implement it with all the glory 
of dictionaries etc.  You can implement seq etc just as you do now!

A few other brief rejoinders to things people have said

* My taste is that isomorphic possibly-abstract types should get a new kind
of type declaration (newtype, isotype, or whatever), perhaps defined to be
exactly equivalent to an algebraic data type with just one strict
constructor.  But I don't feel strongly about this.

(At least we appear to be agreed on the idea that there should
be an explicit constructor.)

* I'd misunderstood Phil's position about liftedness.  He advocates
- unlifted functions
- lifted algebraic data types (all of them)
- unlifted tuples (just tuples, not all single-constr types)

I have never, never been tripped up by the liftedness of tuples, but the
argument that ``we are prepared to pay for laziness so why not this too''
has a certain masochistic charm.  I'll try the effect on performance of
making all tuple-matching lazy in the nofib suite.

* Phil remarks that pattern-matching in let/where is implicitly twiddled. 
That's true, and it is inconsistent; but while we might fix it for tuples,
we can't fix it for general algebraic data types --- so is it important to
fix it for tuples...?

Simon


Effect 1: strictness becomes less straightforward
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider
f x y = x+y

Quick!  Is this strict in x?  Would your answer be any
different if I wrote

f x = \y -> x+y
or
f = \x y -> x+y

(I hope not.)

The point is this.  We would usually say that f is strict iff
f _|_ = _|_
But, if function space is lifted, then f _|_ is
certainly not _|_!  So is f strict or not?  Blargh.
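
Concretely, with the f above and a seq that works on functions (which is
exactly what a lifted function space provides), the lifting is observable:

    f :: Int -> Int -> Int
    f x y = x + y

    notBottom :: ()
    notBottom = f undefined `seq` ()   -- ()  : f undefined is a lambda, not _|_

    stillDiverges :: Int
    stillDiverges = f undefined 3      -- _|_ : yet f is surely strict in x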

At the cost of extra complexity we can fix this.  We have to run off to our
strictness analysis abstract interpreters, and fix up the rule which
currently says

apply fun Bot | isStrict fun = Bot

Instead we have to return some abstract value which ``remembers'' that
when it has eaten up another argument it should give Bot, but which meanwhile
is definitely not Bot.

I can't say I relish this.
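
For concreteness, roughly what that fix amounts to, as a sketch with
invented names (Type, AbsVal, botAfter), and assuming the applied function
is already known to be strict:

    data Type   = FunTy Type Type | DataTy            -- just enough type structure
    data AbsVal = Bot | Fun (AbsVal -> AbsVal) | Top  -- abstract values

    -- Unlifted function spaces: the rule quoted above.
    applyUnlifted :: AbsVal -> AbsVal -> AbsVal
    applyUnlifted Bot     _   = Bot
    applyUnlifted (Fun _) Bot = Bot          -- strict fun applied to Bot is Bot
    applyUnlifted (Fun f) a   = f a
    applyUnlifted Top     _   = Top

    -- Lifted function spaces: a strict fun applied to Bot is *not* Bot when
    -- the result is itself a function; it becomes Bot only after eating the
    -- remaining arguments.
    applyLifted :: Type -> AbsVal -> AbsVal -> AbsVal
    applyLifted _     Bot     _   = Bot
    applyLifted resTy (Fun _) Bot = botAfter resTy
    applyLifted _     (Fun f) a   = f a
    applyLifted _     Top     _   = Top

    -- The abstract value that ``remembers'' to give Bot once fully applied.
    botAfter :: Type -> AbsVal
    botAfter (FunTy _ res) = Fun (\_ -> botAfter res)
    botAfter DataTy        = Bot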

Effect 2: useful transformations become invalid
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It's already been pointed out that the eta-rule becomes invalid if
function spaces are lifted.  I didn't use to think that was important till
I came across this example:

putString (c:cs) = putChar c `thenIO_` putString cs

Once we unfold thenIO_, and make pattern-matching explicit,
we get

  putString = \cs -> case cs of
                       (c:cs) -> \s -> case putChar c s of
                                         (_, s') -> putString cs s'

Never mind the details, just focus on that \s inside the branch of
the case.  I'd *like* now to transform to

  putString = \cs s -> case cs of
                         (c:cs) -> case putChar c s of
                                     (_, s') -> putString cs s'

That is, I'd like to float the \s outside the case.  Currently
I'm allowed to do that, and it is advantageous to do so because
it brings the two \'s together.  (I'll elaborate for anyone who's interested.)
But with lifted function spaces this transformation is Wrong.
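
A cut-down sketch of the failure, with hypothetical g1/g2 and seq standing
in for the observer that a lifted function space provides:

    g1, g2 :: [Int] -> Int -> Int
    g1 = \cs -> case cs of (c:_) -> \s -> c + s
    g2 = \cs s -> case cs of (c:_) -> c + s        -- the ``floated'' version

    -- g1 undefined `seq` ()   diverges: the case forces the list straight away
    -- g2 undefined `seq` ()   is ():    g2 undefined is already a lambda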

What upsets me about this example is that it is just the sort of inner loop which
appears a lot in our I/O and array-manipulation system, so I'm reluctant to 
take a performance hit there.   





Re: Lifted functions

1993-11-04 Thread Joe Fasel


Simon writes
| I have never, never been tripped up by the liftedness of tuples, but the
| argument that ``we are prepared to pay for laziness so why not this too''
| has a certain masochistic charm.  I'll try the effect on performance of
| making all tuple-matching lazy in the nofib suite.

Good idea.  But on the subject of being tripped up by lifted tuples:
wouldn't you think it would be good never to have to twiddle a tuple pattern?

|   putString = \cs -> case cs of
|                        (c:cs) -> \s -> case putChar c s of
|                                          (_, s') -> putString cs s'
|
| Never mind the details, just focus on that \s inside the branch of
| the case.  I'd *like* now to transform to
|
|   putString = \cs s -> case cs of
|                          (c:cs) -> case putChar c s of
|                                      (_, s') -> putString cs s'
|
| That is, I'd like to float the \s outside the case.  Currently
| I'm allowed to do that, and it is advantageous to do so because
| it brings the two \'s together.  (I'll elaborate for anyone who's interested.)
| But with lifted function spaces this transformation is Wrong.
|
| What upsets me about this example is that it is just the sort of inner loop which
| appears a lot in our I/O and array-manipulation system, so I'm reluctant to 
| take a performance hit there. 

Great!  Now, is there a similar argument about transformations and lifted
tuples?  Here's at least one example (which I've probably mentioned on
this list before):  In a comment in PreludeList, I said

span p xs  =  (takeWhile p xs, dropWhile p xs)

In fact, this is false, because when xs diverges, the left side is _|_,
but the right side is (_|_,_|_).  This represents an important transformation,
known in the imperative world as loop fusion.  It would be nice if it
were valid.
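
A sketch of how the inequation shows up, with hypothetical lhs/rhs and seq
as the observer:

    lhs, rhs :: ([Int], [Int])
    lhs = span even undefined
    rhs = (takeWhile even undefined, dropWhile even undefined)

    -- lhs `seq` ()   diverges: span must inspect its list argument first
    -- rhs `seq` ()   is ():    the pair constructor is already at the top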

Here's another way of looking at this issue:  An imperative routine can
easily have multiple effects, whereas a function (for syntactic reasons,
essentially) must return a tuple to have multiple results.  It's important
that this artificial packaging of values not have any cost.  If we insist
that a tuple has some meaning beyond the meanings of its components, we
run the risk of paying for that distinction.  (I think this is what Arvind
was referring to when he mentioned the importance of tuple elimination in
the Id compiler.)  I think this is rather like the problems Simon uncovered,
arising from considering a function to be anything more than its extensional
behavior.

--Joe