Martin Odersky writes:

> Well, parametric type classes can give you a type-systematic basis for
> doing this. Using some of the notation of our Lisp and FP '92 paper,
> you could define:
>
>     class f :: Function a b where      -- the class of functions
>       ($) :: f -> a -> b               -- 'apply' is expressed as infix '$'
>
>     inst a -> b :: Function a b where  -- '->' is the classical instance type
>       f $ x = f x
>
>     class (f :: Function a b, a :: Ix) => f :: FiniteFunction a b where
>       dom :: f -> (a, a)
>       -- the class of finite functions (with rectangular domains),
>       -- defined as a subclass of Function
>
>     inst Array a b :: FiniteFunction a b where
>       arr $ x = arr ! x                -- an instance of a finite function
>       dom arr = bounds arr
>
> You might object that all we did was exchange one awkward symbol for
> indexing (!) by another ($). To be able to use juxtaposition for
> array indexing, we need to get a "handle" on juxtaposition. One way
> to do that would be by decreeing:
>
>     f x   is syntactic sugar for   f $ x .
>
> In that case we need to adapt the instance declaration of arrow:
>
>     inst a -> b :: Function a b where
>       f $ x = primApply f x
>
> And we are done.

I like this. I like this very much indeed. I like it even more since it
seems to open the door to arbitrary functional representations, which is
very important in the engineering world.

> However, I am not sure we want to do that, since principal types
> become very unwieldy. For instance, if composition were defined as
> usual
>
>     (f . g) x = f (g x) ,
>
> its principal type would be
>
>     (ft :: Function b c, gt :: Function a b) => ft -> gt -> a -> c.

This seems to me a very survivable cost. Of course, I am biased; my
application area needs this feature a lot (which means I can include it
even if Haskell as a whole does not).

> [Aside: This is another example where multiple argument type classes
> lead to ambiguities, but parametric type classes don't.]

Pardon a dumb question, but could I request a clarification here?
Is the equivalent multiple argument type class definition ambiguous?

> The principal type is what it should be, since each of f and g can be
> an array instead of a function (or it can be some other instance of
> class Function). You see, the cost of generality is increased
> complexity of many principal type expressions. I am not sure whether
> this cost is worth the gains (it might be). On the other hand, having
> an explicit overloaded infix operator for application does not seem to
> complicate anything.

I submit that this is worthwhile for the added generality of arbitrary
function representations. While I do, so help me, admit my bias here, it
seems to me that there are lots of cases where we want to be allowed to
represent functions by other means. I would think that this would be of
real interest to the functional database folks. If we look at databases
as collections of functions, the ability to represent an arbitrary
functional relationship as an actual function, which can be called and
evaluated, seems of tremendous utility (at least to my poor brain).

In a separate message (combined with this one for space), Ken Sailor
writes:

> In general, although the set-theoretic definition of a function is a
> set of ordered pairs, it is convenient to think of a function as a
> rule. That is, given an input, the rule tells how to compute the
> output. It may be that for some special functions, we want to encode
> the rule as a lookup on some table, but in fact this technique is not
> generally applicable. Why store a gazillion values for + when we can
> implement the rule in hardware?

I had no thought of replacing functions with arrays, merely of allowing
arrays to be interpreted as functions.

> In the functional world, nobody has figured out how to get all the
> practical benefits of imperative arrays, for a variety of reasons.
> Until these benefits are achieved, functional languages will not be
> practical for most, if not all, scientific programming applications.
> (apologies to the sisal community in advance -- my own interest is in
> lazy functional languages like Haskell, where arrays are in greater
> disorder).

My impression was that, while efficiently replacing individual elements
of an array remains a problem in lazy functional languages, no one has
any difficulty providing constant-time access.

> It entirely misses the point of why arrays are hard to say that there
> are no differences between arrays and general functions. In theory,
> arrays are a snap. You can have them in a variety of interesting
> forms, but they don't come with the performance guarantees they have
> in the imperative world, and that is the problem of arrays in a
> nutshell.

I was not suggesting a solution to the problems with arrays in
functional languages. I was suggesting that various kinds of conceptual
problems (including the simple one of syntax) become much easier when
arrays are regarded as another mechanism for defining a function.

> And finally, it makes sense to have separate syntax for arrays and
> general functions, because different behavior is expected for the two.

Here I may be exposing my cluelessness, but this seems a (search for a
better word --- none found) silly statement. There are many cases where
we want different behavior to be expressed by the same syntax; type
classes exist precisely to allow different behavior with the same
syntax. Using the same syntax for different behavior has tremendous
benefits in some cases; otherwise a large portion of modern languages
--- functional and imperative --- would be irrelevant. I am simply
arguing that arrays and functions are one of those cases.

I omit comments on the following articles, since I believe my comments
above apply to them as well.

Dave Barton
[EMAIL PROTECTED]
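P.S. For the curious: the Function class sketched above can be
approximated in GHC Haskell using multi-parameter type classes with a
functional dependency. This is my own rendering of the quoted notation,
not Odersky's; the dependency `f -> a b` plays roughly the role of the
"parametric" type parameter, and the `Table` type at the end is an
invented illustration of the database point, not part of the proposal.

```haskell
{-# LANGUAGE MultiParamTypeClasses  #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances      #-}

import Prelude hiding (($))
import Data.Array (Array, Ix, bounds, listArray, (!))
import qualified Data.Map as Map

infixr 0 $

-- The class of function representations.  The dependency f -> a b says
-- that the representation type determines domain and codomain, which is
-- what keeps principal types unambiguous.
class Function f a b | f -> a b where
  ($) :: f -> a -> b

-- Ordinary functions are one representation ...
instance Function (a -> b) a b where
  f $ x = f x

-- ... and arrays are another: same syntax, different behavior.
instance Ix i => Function (Array i e) i e where
  arr $ x = arr ! x

-- Finite functions (with rectangular domains), as a subclass.
class (Function f a b, Ix a) => FiniteFunction f a b | f -> a b where
  dom :: f -> (a, a)

instance Ix i => FiniteFunction (Array i e) i e where
  dom = bounds

-- Composition, at the principal type quoted above.
compose :: (Function f b c, Function g a b) => f -> g -> a -> c
compose f g x = f $ (g $ x)

-- An invented lookup-table representation, e.g. a stored relation in a
-- functional database.
newtype Table k v = Table (Map.Map k v)

instance Ord k => Function (Table k v) k v where
  Table m $ x = m Map.! x

main :: IO ()
main = do
  let squares = listArray (0, 9) [ i * i | i <- [0 .. 9] ] :: Array Int Int
  print (squares $ 3)               -- array applied as a function: 9
  print (compose (+ 1) squares 3)   -- function composed with an array: 10
  print (dom squares)               -- (0,9)
  let t = Table (Map.fromList [(1, "one"), (2, "two")]) :: Table Int String
  putStrLn (t $ 2)                  -- two
```

Note how the functional dependency answers exactly the ambiguity
question above: without `f -> a b`, a use such as `squares $ 3` leaves
the result type undetermined, and the multi-parameter version is indeed
ambiguous.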