As part of thinking about interop, I've been reviewing the BitC stance on
object-oriented programming. Object-oriented programming in the style of
C++/Java/C# remains, in my opinion, the wrong model. That said, it's a
popular model, and certainly one that is not going away anytime soon.
While I would be very reluctant to twist the BitC type system to accommodate
OO, I see no reason to stand on a pedestal and exclude OO if the type system
can actually handle it. In fact, I'd be willing to add minor enhancements
to make it work. If nothing else, we need to think about this for the sake
of code conversion.
BACKGROUND:
I don't believe the current implementation supports it fully, but BitC
already provides a mechanism for ad-hoc polymorphism over
structures through the built-in "has-field" constraint. This takes the form:
(has-field 'a FieldName 'b)
with the intended meaning that the type 'a has some field FieldName of type
'b. So given:
(define (f x) x.fld)
we would infer:
(forall ((has-field 'a FieldName 'b)) (fn 'a -> 'b))
HAS-FIELD can be viewed as a realization of row sub-typing expressed in
ad-hoc form through the constraint system. If we ignore issues arising from
inheritance, this is sufficient to support full polymorphism over
structures, and it apparently works just fine for structures containing
methods. If the type of x is known, then the constraints solve in the
obvious way. From:
(define (f x:S) x.fld)
we would infer:
(forall ((has-field S FieldName 'b)) (fn S -> 'b))
We can then determine syntactically that S has such a field, and we can
immediately resolve the type of 'b.
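As an analogy only (this is Python, not BitC, and the names HasFld, f, and S
are mine), a structural Protocol type expresses much the same constraint as
has-field: any structure carrying a suitably typed field satisfies it, and
once the concrete type is known the constraint discharges against its
declaration:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical analogue of (has-field 'a fld 'b): any type with a
# field named "fld" of type int satisfies the constraint.
class HasFld(Protocol):
    fld: int

def f(x: HasFld) -> int:
    # Legal for any structure carrying an int field "fld",
    # without committing to one concrete structure type.
    return x.fld

@dataclass
class S:
    fld: int
    other: str

# The constraint solves against S's declaration in the obvious way.
print(f(S(fld=42, other="hi")))  # → 42
```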
This got me wondering: is this good enough to get us to single
inheritance for programmers who want it? The answer seems to be "almost".
Let me walk through my thinking and let's see if we can show where it
breaks:
First, note that the structure declaration and the equivalent has-field
constraints can be viewed as interchangeable. That is, given the structure
definition:
(defstruct S i:int f:float)
the following two typings seem to be entirely equivalent and interchangeable
(please excuse the missing quantification):
S    and    ((has-field S i int), (has-field S f float))
Indeed, the left typing implies the right typing, which is why we can remove
the constraint in the previous example.
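To illustrate the direction of that implication (again in Python, my own
sketch, not BitC), the has-field constraints are syntactically derivable
from the structure definition itself, which is why the checker can discharge
them by inspection:

```python
from dataclasses import dataclass, fields

@dataclass
class S:
    i: int
    f: float

# The declaration of S implies a has-field constraint for each field.
# Here we derive the constraint set syntactically from the definition,
# mirroring how (has-field S i int) resolves by looking at S.
constraints = {fld.name: fld.type for fld in fields(S)}
print(constraints)  # → {'i': <class 'int'>, 'f': <class 'float'>}
```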
So what strikes me here is that the main problem in getting from here to
single inheritance is virtual methods where the type of the override differs
in some way from the type of the base. Looking over the work in F-Bounded
polymorphism<http://www.cs.utexas.edu/~wcook/papers/FBound89/CookFBound89.pdf>,
it seems to me that the root problem comes from two goals that may not be
important:
- The desire to preserve subtyping across methods involving recursive
type, and
- The desire not to *lose* type information when an operation like Move
is invoked.
It seems to me that the first goal is unnecessary, and the second goal is
more than we require in order to deal with CTS. I am wondering whether an
ad-hoc solution might suffice to give us what *current* OO languages
with single inheritance provide in practice.
What I am imagining is that we think of virtual functions as having an
"ordered union type". That is: a sequence of types of progressively
decreasing desirability. When resolving a call at a method call site, we
proceed down this list from right to left, where the rightmost type is the
most derived virtual overload that is visible given the type information at
hand, and the leftmost type is the *least* derived virtual declaration. If
we find any legal match under the usual rules, then the call is permitted.
Union types of this form are not union types in the general sense. First,
there is a resolution order yielding a particular, concrete resolution.
Second, they arise only in constrained syntactic contexts. Third, the
ordering relationship is syntactically derivable from the class
declarations, and therefore need not be inferred. A type
inference implementation for this may require back-tracking in the type
checker, but that is already required for constraints in general, so I see
no major new issue here in that respect.
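The resolution rule above can be sketched in a few lines. This is my own
Python mock-up (Point, Point3D, and resolve are invented names), showing the
right-to-left scan over an ordered candidate list:

```python
def resolve(candidates, arg_types):
    """Resolve a call against an "ordered union type".

    candidates: list of (param_types, impl) pairs, ordered from the
    least derived declaration (leftmost) to the most derived override
    (rightmost). We scan right to left and take the first signature
    the argument types legally match.
    """
    for param_types, impl in reversed(candidates):
        if all(issubclass(a, p) for a, p in zip(arg_types, param_types)):
            return impl  # most derived visible overload that matches
    raise TypeError("no legal match; the call is rejected")

class Point: ...
class Point3D(Point): ...

candidates = [
    ((Point,),   lambda p: "Point.move"),    # base virtual declaration
    ((Point3D,), lambda p: "Point3D.move"),  # derived override
]

print(resolve(candidates, (Point3D,))(None))  # → Point3D.move
print(resolve(candidates, (Point,))(None))    # → Point.move (falls back)
```

Note that the resolution order is fixed by the class declarations, so this
scan is deterministic rather than inferred, matching the point about the
ordering being syntactically derivable.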
Given the amount of effort that has gone into subtyping and subclassing over
the years, this seems FAR too obvious to actually work, or surely somebody
would have noticed by now.
Can anybody explain why this won't work? I grant from the outset that it's
just plain ugly, but then, that seems somehow appropriate in the context of
a discussion of OO code... :-)
shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev