Larry Wall wrote:
On Tue, Oct 25, 2005 at 10:25:48PM -0600, Luke Palmer wrote:
: Yeah, I didn't really follow his argument on that one.  I, too, think
: that the one() junction in general is silly, especially for types.

Well, I think it's silly too.  I'm just trying to see if we need to
reserve the syntax in case someone figures out a way to make it
unsilly in some dialect.

So, here are three non-trivial types that make excellent use of the
one() junction:

  1) The parse tree of a packrat parser
  2) an n-ary, recursive decision tree
  3) a multi method

I wouldn't call these silly :)
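For concreteness, here is a sketch of what a one()-style type test
means (in Python rather than Perl 6, since the junction syntax is
still in flux; all class names are made up): a value matches
one(Dog, Cat) exactly when it conforms to precisely one of the
listed interfaces.

```python
# Hypothetical interfaces for illustration only.
class Dog: ...
class Cat: ...
class CatDog(Dog, Cat): ...   # conforms to both interfaces at once

def one_of(value, *types):
    """True iff value is an instance of exactly one of the given types."""
    return sum(isinstance(value, t) for t in types) == 1

print(one_of(Dog(), Dog, Cat))     # True: matches Dog only
print(one_of(CatDog(), Dog, Cat))  # False: matches both, so one() fails
```

This is the excluded middle that Dog^Cat asks for: conforming to
both interfaces is as much a mismatch as conforming to neither.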

: When you say Dog^Cat, you're saying "I want something that either
: conforms to the Dog interface or the Cat interface, but *definitely
: not both*!"  Why the heck would you care about that?  Does there
: really arise a situation in which your code will be erroneous when the
: variable conforms to both interfaces?

Hmm, I think MMD in a certain way picks one of many targets out of
a collection of applicable ones. The whole point of the excluded middle
is *avoiding ambiguity* up-front.

The question is at what point in the timeline of a concrete program
instance one wants to resolve the ambiguity. Note that it is easy
to shift it to *after* a successful run with a certain set of arguments!

: And in fact, its very existence defies another implicit principle of
: mine, that is, the "principle of partial definition":  Defining a new
: type or instance can only break a previously typechecking program by
: making it ambiguous.  The idea behind that is that at some time you
: may realize that one of your types already obeys another type, and
: declare that it conforms to that interface.  But you don't go the
: other way around, undeclaring that an interface holds, without your
: program having been erroneous in the first place.  Declaring that a
: new interface holds (so long as it actually does) shouldn't break
: anything that was already correct.

I think the situation is a bit more complicated than that. Hidden in
this chain of arguments is an assumption of asymmetry, in the sense
that one interface was there first and therefore the other has to adapt.
Thus putting two self-consistent systems together could break the
whole thing. Take, e.g., driving on the left versus the right in Great
Britain and continental Europe. Both are self-consistent but mutually
exclusive. I can hardly imagine a shared-lane approach for disambiguation.
This works for pedestrians but not cars ;)

Another example where 1 + 1 = 1 is two la-ola waves in a stadium
running in opposite directions. For the individual standing up
where the waves permeate each other it makes no difference, but
the people outside that area get up and down in a higher-frequency
pattern than for a single wave. That is, they stand up because
1 + 0 = 0 + 1 = 1 for waves from the left and from the right.
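A minimal simulation of that idempotent superposition (Python for
illustration; the circular-stadium model and all names are my own
assumptions): a spectator stands whenever at least one wave reaches
their section, so two coinciding waves produce a single stand-up,
while everyone else stands twice per cycle.

```python
# Two la-ola waves running in opposite directions around a circular
# stadium of N sections; 'or' is the 1 + 1 = 1 rule.
N = 8  # number of sections (illustrative choice)

def stands(x, t):
    """True iff section x stands at time t: left wave at t, right wave at N - t."""
    return t % N == x or (N - t) % N == x

standups = [sum(stands(x, t) for t in range(N)) for x in range(N)]
print(standups)  # → [1, 2, 2, 2, 1, 2, 2, 2]
```

Sections 0 and 4 are where the waves permeate each other: the two
simultaneous waves add up to a single stand-up there, while every
other section gets up twice per cycle.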

And that's basically what we decided in Portland when we went to
set types rather than junctional types.  And that's why I'm kind of
pushing for a natural bundling via juxtaposition that can be viewed
as ANDing the constraints:

    :(Any Dog Cat Fish ¢T $dc)

That says much the same thing as

    :($ where {
            .does(Any) and
            .does(Dog) and
            .does(Cat) and
            .does(Fish) and
            .does(Class) and ¢T := $dc.class and
            .does(Scalar) and $dc := $_;
        })

This is a very nice way to avoid explicit infix & syntactically,
which is a great achievement in its own right. BTW, does a sub
name in there count as a type constraint? Or are only package
kinds applicable? I mean the ones that would get a :: sigil if
the sigil were required.

And basically, if | can be used to construct type sets, it ends up
meaning exactly the same thing:

    :(Any|Dog|Cat|Fish ¢T $dc)

But maybe that just means we don't need it.
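As a cross-check of the ANDed reading, here is a Python stand-in
(all class names hypothetical, not the actual Perl 6 semantics):
juxtaposed types mean the bound value must conform to every listed
interface at once.

```python
# Hypothetical interfaces; Chimera satisfies all three at once.
class Dog: ...
class Cat: ...
class Fish: ...
class Chimera(Dog, Cat, Fish): ...

def conforms_to_all(value, *types):
    """True iff value does every one of the given interfaces (ANDed constraints)."""
    return all(isinstance(value, t) for t in types)

print(conforms_to_all(Chimera(), Dog, Cat, Fish))  # True
print(conforms_to_all(Dog(), Dog, Cat, Fish))      # False: fails Cat and Fish
```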

I thought theory theory is about lifting these issues onto a higher
level by introducing an explicit calculus with predicates on type
variables, and then letting the type system point out (self-)
inconsistencies and, more importantly, incompatibilities between two
self-consistent modules forced to play together in an importing
module or program.

I further thought that junctions are the value or runtime shadow
of the theory level. Just as the grammar engine is always there,
the type system is also hanging around for occasional runtime
interference---e.g., throwing exceptions or on-the-fly theory
instantiations or some such. In both cases compile time is
in a certain way the extreme case of "interference" of these two
systems producing the code of the program :)

The following puzzle might serve to illustrate the point I'm trying
to make. There is a given square of side length a. At the top-left
and bottom-right corners a line of length b is attached, extending
in the direction of the left and right sides of the square. The
endpoints of these extensions are connected with the respective
opposite corners, thus forming a parallelogram. Here's an ASCII art
of the resulting picture:

 b | \
   |\  \    |
   | \  \   |
 a |  \  \  |
   |   \  \ |
   |    \  \|
          \ | b

Now the following slightly imprecise question is asked: "What is the
combined area of the two shapes?"

The perception psychologist uses this question to test the ability
of a contestant to think outside the prescribed idea of an extended square
and switch to a view of two triangles that, when moved vertically until
their hypotenuses align, easily give the resulting area T = a*(a+b) = a**2 + a*b.
Usually the problem is actually stated with a := a and b := a+b to give the
even simpler formula T = a*b, the area of a rectangle.

The mathematically inclined guy I happen to be sees an inconsistency in
the question: how should the overlap of the parallelogram inside
the square be handled? In the expected disambiguation from above, the
construction is interpreted in a somewhat convoluted way as describing
two *independent* right-angled triangles with area T/2 = 0.5*a*(a+b) each.

But you could also go for the area of the outlined six-corner polygon!
That gives the area P = a**2 + b*x, where x = (a*b)/(a+b) is the
length from the corner to the point where the diagonals intersect the
sides of the square. Eliminating x gives P = a**2 + a/(a+b) * b**2.

Interestingly, the two approaches agree in the two cases b = 0 and
b = Inf. Both fail to give reasonable results for negative b. The
two-triangle interpretation should in that case switch to a two-trapezoid
interpretation, while the polygon interpretation should give a constant
area P = a**2 for -a <= b < 0. But arguably negative b is out of spec.
Then again, the programmer might have chosen a signed implementation
type and left the door wide open for negative input of b...
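The agreement at the boundary cases can be checked numerically; a
quick sketch (Python, using the formulas T and P exactly as derived
above):

```python
def T(a, b):
    """Two independent right triangles: T = a*(a+b)."""
    return a * (a + b)

def P(a, b):
    """Outlined six-corner polygon: P = a**2 + a/(a+b) * b**2."""
    return a**2 + a / (a + b) * b**2

a = 1.0
print(T(a, 0.0), P(a, 0.0))   # both give a**2: the readings agree at b = 0
print(T(a, 1e9) / P(a, 1e9))  # ratio tends to 1 as b -> Inf
```

For any finite positive b the two readings disagree, which is exactly
the ambiguity the question smuggles in.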

So, what did I want to convey with this little narrative? Well, I would
expect the type system to come up with the question "how should
the overlap be handled?" automatically, if it got the abstract problem
stated in a machine-readable form. Here that is difficult to imagine,
but in more programming-related settings it should be the case. And
I strongly believe that such a functionality helps a lot in large-scale
programming! And of course in writing specifications before implementation
starts :)

Sorry, if this is too off-topic.
$TSa.greeting := "HaloO"; # mind the echo!
