With my idea of deriving a type lattice from all role definitions,
the problem of subtyping signatures arises. Please help me think
this through. Consider:

role Foo {
    sub blahh(Int, Int) {...}
}
role Bar {
    sub blahh(Str) {...}
}
role Baz does Foo does Bar {   # Foo|Bar lub
    # sub blahh(Int&Str, Int?) {...}
}

The role Baz has to be the lub (least upper bound) of Foo and Bar,
that is, the join of the two nodes in the lattice. This means, first
of all, that the sub blahh has to be present, and that its signature
has to be in a subtype relation <: to both :(Int,Int) and :(Str).
Note that Int <: Int&Str and Int|Str <: Int. The normal contravariant
subtyping rules for functions give

        +--------- :> ---+
        |                |
   :(Int&Str,Int?) <: :(Int,Int)
              |              |
              +--- :> -------+

        +--------- :> ---+
        |                |
   :(Int&Str,Int?) <: :(Str)

I hope you see the contravariance :)

The question mark shall indicate an optional parameter, which allows
the subtype to be applicable at both kinds of call sites, those with
one argument and those with two.
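
To make the arity trick concrete, here is a small sketch using Perl 6's
capture-against-signature smartmatch (the $candidate name is mine): a
signature whose second parameter is optional accepts both one-argument
and two-argument calls.

    my $candidate = :(Int $a, Int $b?);   # second parameter optional
    say \(1, 2) ~~ $candidate;            # True: two-argument call site
    say \(1)    ~~ $candidate;            # True: one-argument call site
    say \(1)    ~~ :(Int $a, Int $b);     # False without the optional marker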

The choice of the glb for the first parameter means that the sub in
Baz requires its implementor to handle the supertype of Int and Str,
which in turn allows Int and Str arguments to be substituted, since
they are subtypes---that is, types with a larger interface.
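
Assuming, as explained above, that the Int&Str parameter is meant to
accept either an Int or a Str argument, here is a minimal Perl 6 sketch
of the proposed :(Int&Str, Int?) signature for Baz (the sub name is
mine, and a where clause stands in for the constraint):

    # one body callable at both Foo-style and Bar-style call sites
    sub blahh-baz($x where Int|Str, Int $y?) {
        $y.defined ?? "two args: $x $y" !! "one arg: $x"
    }
    say blahh-baz(1, 2);    # Foo-style call site: (Int, Int)
    say blahh-baz("hi");    # Bar-style call site: (Str)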

Going the other way in the type lattice, the meet Foo&Bar of the two
roles Foo and Bar is needed. But here the trick with the optional
parameter doesn't work, and it is impossible to reconcile the two
signatures. This could simply mean dropping sub blahh from that
interface. But is there a better way? Perhaps introducing a
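
For what it's worth, Perl 6 already has a mechanism for carrying two
irreconcilable signatures under one name: multi candidates. A minimal
sketch of the status quo in Rakudo, where same-named multi methods from
two roles compose without conflict (Foo2/Bar2/Baz2 are my names, and
multi method replaces sub, since a sub in a role body is only lexical
and is not composed):

    role Foo2 {
        multi method blahh(Int $a, Int $b) { "Foo: $a $b" }
    }
    role Bar2 {
        multi method blahh(Str $s) { "Bar: $s" }
    }
    class Baz2 does Foo2 does Bar2 { }  # candidates merge, no conflict

    say Baz2.new.blahh(1, 2);   # dispatches to Foo2's candidate
    say Baz2.new.blahh("x");    # dispatches to Bar2's candidate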

Apart from the arity problem, the lub Int|Str works for the first parameter:

        +--------- <: ---+
        |                |
   :(Int|Str,Int)  :> :(Int,Int)
              |              |
              +--- <: -------+

        +---- <: ---+
        |           |
   :(Int|Str) :> :(Str)
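
The arity problem itself can be made concrete with the same kind of
capture-against-signature smartmatch: no call is acceptable to both of
the original signatures, so no single signature for blahh in the meet
can have both of them as subtypes.

    say \(1, 2) ~~ :(Int, Int);   # True:  a Foo-style call ...
    say \(1, 2) ~~ :(Str);        # False: ... is never a Bar-style call
    say \("x")  ~~ :(Str);        # True:  a Bar-style call ...
    say \("x")  ~~ :(Int, Int);   # False: ... is never a Foo-style call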

Regards, TSa.
