HaloO,

Yuval Kogman wrote:
> The try/catch mechanism is not like the haskell way, since it is
> purposefully ad-hoc. It serves to fix a case by case basis of out
> of bounds values. Haskell forbids out of bound values, but in most
> programming languages we have them to make things simpler for the
> maintenance programmer.

My view here is that the parameters in the code pick a point in
the range from free at the one end to bound at the other.
The unifying concept is constraints. So I see the following boundary
equalities:

  unconstrained = free  # 0.0
  constrained           # 0.0 ^..^ 1.0
  fully constrained     # 1.0

as the black and white ends with shades of gray in the middle.
And it is the type system that guarantees the availability
of the required information, e.g. in $!. In that sense a sub
with a CATCH block is a different type than one without. This
difference is taken into account when dispatching exceptions.
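
To make the shades of gray concrete, here is a little sketch in
Perl 6 syntax as I currently understand it (the subset name
Probability is just mine for illustration):

   subset Probability of Real where 0.0 ^..^ 1.0;  # the constrained middle

   my Real        $free = -5;    # unconstrained end: any Real is accepted
   my Probability $p    = 0.5;   # satisfies the 0.0 ^..^ 1.0 constraint
   #my Probability $bad = 1.5;   # would fail the type check on assignment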


> Reentrancy is an implementation detail best left unmentioned.

Uh oh, in an imperative language with state that is shared outside
the per-invocation state of a sub, the reentrancy problem is simply
there. Interestingly, it is easily unifiable with shared data.
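
To pin that down, a small sketch (the names are mine): the lexical
outside the sub is the shared state, while the parameter and the
my-variable inside are private to each invocation.

   my $shared = 0;             # one copy, shared by all invocations

   sub tick (Int $step) {
       my $mine = $step * 2;   # per-invocation state
       $shared += $step;       # every (re)entry sees the shared state
       return $mine + $shared;
   }

   say tick(1);   # 3
   say tick(1);   # 4: same arguments, different result via shared state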


> Assume that every bit of code you can run in perl 6 is first class
> code - it gets safety, calls, control flow, exceptions, and so
> forth.

Just to synchronize our understanding, I see the following
equivalences between the data and code domains:

    data    code

    class = sub
 instance = invocation

To illustrate my view consider

class foo
{
   has ...
   method ...
}

and match with

sub foo
{
   has ... # my named parameters defined in body proposal

   BEGIN ...
   CATCH ...

   label:
}

What I want to say is that calling a sub means creating an
instance of the class that describes, or rather *constrains*,
the potential invocations. If such an invocation is left lying
around in memory unfinished, we have a continuation. How these
continuations are stepped concurrently in real time, relative to
their respective inner causal logic, is irrelevant to the concept.
But *causality* is important!
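
Here is a tiny sketch of that invocation-as-instance reading
(purely illustrative): every call of counter builds a fresh
environment, and that environment can outlive the call just like
a constructed object does.

   sub counter {
       my $n = 0;              # belongs to *this* invocation's environment
       return sub { ++$n };    # the environment stays alive in the closure
   }

   my $a = counter();
   my $b = counter();
   say $a(); say $a();   # 1, 2: $a's instance advances
   say $b();             # 1: $b's instance is independent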

The view I believe Yuval is harboring is the one exemplified
in movies like The Matrix or The 13th Floor and that underlies
the holodeck of the Enterprise: you can leave the intrinsic
causality of the running program and inspect it. Usually that
is called debugging. But this implies the programmer catches
a breakpoint exception or some such ;)

Exception handling is the programmatic automation of this
process. As such it works better the closer it is in time
and context to the cause, and the more information is preserved.
But we all know that a useful program is lossy in that respect.
It re-uses finite resources during its execution. In an extreme
setting one could run a program *backwards* if all relevant
events were recorded!


> Yes, even signals and exceptions.

> The runtime is responsible for making these as fast as possible
> without being unsafe.

Hmm, I would see the type system in that role. It has all the
information about the interested parties in a longjump. If it
knows there are no potential handlers, nothing needs to be set
up for them at all.


> It can't be a method because it never returns to its caller - it's

"It" being the CATCH block? Then I think it *is* in a certain
way a method with $! as its invocant. HiHi, and here a locality
metric for dispatch applies. BTW, how is the signature of a CATCH
block given? Simply

   CATCH SomeException {...}

or is inspection with cascaded when mandatory?
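
For comparison, here is how the cascaded when form could look;
the OutOfBounds class is made up for the example, and I am using
the spelling where the topic inside CATCH is the exception object,
which is why the block really reads like a method on $!:

   class OutOfBounds is Exception {
       has $.value;
       method message { "value $.value is out of bounds" }
   }

   sub risky (Int $i) {
       die OutOfBounds.new(value => $i) if $i > 9;
       return $i * 2;

       CATCH {                        # dispatched on the type in $!
           when OutOfBounds { say 'repairing: ', .message }
           default          { .rethrow }
       }
   }

   say risky(3);   # 6
   risky(42);      # prints the repair message instead of dying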


> a continuation because it picks up where the exception was thrown,

I would say it is merely given information about that point. In a
way, an exception handler is dispatched on the type of the exception.

> and returns not to the code which continued it, but to the code that
> would have been returned to if there was no exception.

This is the part that I regard as hardly possible from a far-away
scope. But fair enough for closely related code.


> It is, IMHO a multi though. There is no reason that every
> continuation cannot be a multi, because a continuation is just a
> sub.

> I don't know if there are method continuations - i guess there could
> be, but there's no need to permutate all the options when the
> options can compose just as well.

My view is that a (free) method type becomes a continuation
as follows:

  1) the invocant type is determined
  2) the dispatcher selects a matching target
  3) this method maker object (like a class for constructing data objects)
     is asked to create a not yet called invocation and bind it to the
     invocant at hand
  4) at that moment we have a not yet invoked sub instance, so
     plain subs just start here
  5) The signature is checked and bound in the environment of the calling
      scope; the caller's return continuation is one of the args
  6) then this invocation is activated
  7a) a return uses the return continuation in such a way that
      the invocation is abandoned after the jump
   b) a yield keeps the continuation just like a constructed object
      and packages it as some kind of attachment to the return value.
      BTW, I think there are more data types that use this technique
      than one might think at first sight; iterators, junctions and
      lazy evaluation in general come to mind (see the sketch below).

Control structures are wrapped around their blocks more tightly
and efficiently than the above procedure. In particular, the
binding steps 3 and 5 are subject to optimization.
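
Step 7b is to my eye exactly what gather/take does: the not yet
finished invocation is kept around and handed back packaged
together with the values it produces. A small sketch (the sub
name is mine):

   sub naturals {
       gather {
           my $n = 0;
           loop { take ++$n }   # yield a value, keep the invocation alive
       }
   }

   my @nat = naturals().lazy;   # the suspended invocation travels along
   say @nat[0];                 # 1: resumes the invocation on demand
   say @nat[4];                 # 5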


> This is because the types of exceptions I would want to resume are
> ones that have a distinct cause that can be mined from the exception
> object, and which my code can unambiguously fix without breaking the
> encapsulation of the code that raised the exception.

Agreed. I tried to express the same above with my words.
The only thing that is a bit underspecced right now is
what exactly is lost in the process and what is not.
My guiding theme again is the type system, where you leave
information about the things you need preserved in order to
handle unusual circumstances gracefully. Note: *not*
successfully, which would contradict the concept of exceptions!
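
As a sketch of such a repair-and-resume handler (the names
X::AdHoc, .payload and .resume are today's spellings, used only
to make the example concrete; whether resumption ends up specced
exactly like this is of course open):

   sub guarded {
       CATCH {
           when X::AdHoc {            # mine the cause from the exception,
               note 'patched: ', .payload;
               .resume;               # then pick up right after the throw
           }
       }
       die X::AdHoc.new(payload => 'transient glitch');
       say 'continued gracefully after the resumable throw';
   }

   guarded();
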
--
$TSa.greeting := "HaloO"; # mind the echo!
