Tim Sweeney:
>Is this "higher order function application" a useful notion, and does any
>research exist on the topic?

The answer to the first question is "yes, when it matches the intuition of
the programmer". Your first two examples:

>    cos+sin        -- intent: \x->((cos x)+(sin x))
>    cos(sin)       -- intent: \x->cos(sin(x))

have equivalents in Fortran 90 and HPF, although with arrays rather than
functions. For instance, one can write "A+B" to mean an array with value
A(I)+B(I) for all indices I, and A(B) for the array with elements A(B(I))
(provided B is an integer array whose elements all are valid indices for
A). This feature is widely used in array languages, where it is seen as an
intuitive and convenient notation for expressing array operations. I
definitely believe it could also be useful for operations over other data
structures.
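The two array operations described above can be sketched in a few lines of
Python over plain lists (an illustration only; the function names are
invented, and Fortran 90 of course provides this natively):

```python
# Sketch of Fortran 90-style whole-array operations on Python lists.
# elementwise_add models A+B, gather models A(B).

def elementwise_add(a, b):
    """A+B: the array with value A(I)+B(I) for all indices I."""
    assert len(a) == len(b)
    return [x + y for x, y in zip(a, b)]

def gather(a, b):
    """A(B): the array with elements A(B(I)); B must contain
    valid indices for A."""
    return [a[i] for i in b]

A = [10, 20, 30]
B = [0, 2, 1]
print(elementwise_add(A, A))  # [20, 40, 60]
print(gather(A, B))           # [10, 30, 20]
```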

The answer to the second question is "surprisingly little". There is, for
instance, no formal description to be found of the Fortran 90 array
operations and how they are typed. But it is quite straightforward to define
type systems and type checking algorithms for this, when the language is
explicitly typed. One example is

@InProceedings{Thatte-ScalingA,
  author =       {Satish Thatte},
  title =        {Type Inference and Implicit Scaling},
  booktitle =    {ESOP'90 -- 3rd European Symposium on Programming},
  editor =       {G. Goos and J. Hartmanis},
  number =       432,
  series =       {Lecture Notes in Computer Science},
  year =         1990,
  publisher =    {Springer-Verlag},
  address =      {Copenhagen, Denmark},
  month =        may,
  pages =        {406--420}
}

where a type system for an APL-inspired overloading in an FP-like language
is described. This approach is based on subtyping.

A student of mine is pursuing another, more direct approach, in which a
coercive type system is used to resolve the overloading at compile time
through a combined rewrite and type check. He did this for an explicitly
typed variant of Core ML; the work is reported in his licentiate thesis
("file://ftp.it.kth.se/Reports/paradis/claest-licthesis.ps.gz"):

@PHDTHESIS{claest-lic,
        AUTHOR = {Claes Thornberg},
        TITLE = {Towards Polymorphic Type Inference with Elemental Function Overloading},
        SCHOOL = it,
        ADDRESS = {Stockholm},
        YEAR = {1999},
        TYPE = {Licentiate thesis},
        MONTH = may,
        NOTE = {Research Report } # rep-id # {99:03}
}

@STRING{it = "Dept.\ of Teleinformatics, KTH"}

@STRING{rep-id = "TRITA-IT R "}

When the type system is implicit (inference rather than checking), however,
less is known. You can do some tricks with the Haskell class system (for
instance, defining functions between instances of Num to be instances of
Num themselves, which then lets you overload numerical operations like
"+"), but this solution has some restrictions and is also likely to lead
to run-time overheads. We would like to have something better.
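The flavour of that trick can be mimicked in Python with a small
operator-overloading wrapper (a sketch with invented names; it also makes
the run-time overhead visible, since every lifted call goes through an
extra closure):

```python
import math

class Lifted:
    """Wrap a function so that arithmetic on wrapped functions is
    pointwise and application to a wrapped function is composition,
    loosely mimicking a Num instance for functions."""
    def __init__(self, f):
        self.f = f
    def __call__(self, x):
        if isinstance(x, Lifted):
            # cos(sin): \x -> cos(sin(x))
            return Lifted(lambda y: self.f(x.f(y)))
        return self.f(x)
    def __add__(self, other):
        # cos+sin: \x -> cos(x) + sin(x)
        return Lifted(lambda x: self.f(x) + other.f(x))

cos_sin = Lifted(math.cos) + Lifted(math.sin)
print(cos_sin(0.0))  # 1.0

cos_of_sin = Lifted(math.cos)(Lifted(math.sin))
print(cos_of_sin(0.0))  # 1.0
```

The point of the compile-time approaches above is precisely to resolve
this overloading statically, so that no such wrapper survives at run time.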

Finally, there is an interesting discussion of this overloading business,
for array- and data parallel languages, in

@ARTICLE{Sip-Blel-Coll-Lang,
        AUTHOR = {Jay M. Sipelstein and Guy E. Blelloch},
        TITLE = {Collection-Oriented Languages},
        JOURNAL = {Proc.\ {IEEE}},
        YEAR = {1991},
        VOLUME = {79},
        NUMBER = {4},
        PAGES = {504--523},
        MONTH = apr
}

For instance, they bring up the conflicts which may occur when trying to
resolve this overloading for operations over nested data structures. (A
witness is length l, where l :: [[a]]: should it be just length l, or be
resolved into map length l?)
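The ambiguity is easy to see in concrete terms (Python used purely for
illustration, with len standing in for length):

```python
# A nested list, analogous to l :: [[a]].
l = [[1, 2, 3], [4, 5]]

# Reading "length l" at the outer level:
print(len(l))                 # 2

# Resolving it elementwise, i.e. "map length l":
print([len(xs) for xs in l])  # [3, 2]
```

Both readings are type-correct, so the overloading resolution must pick
one, and no choice is right for every program.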

Björn Lisper
