On Fri, Feb 19, 2021 at 8:23 AM Olaf Buddenhagen <[email protected]>
wrote:

> (BTW, I didn't get the desired clarity: but perhaps you could chime in?
> Is there a good generic term for a capability referencing the receive
> end of an IPC port, such as the receive right in Mach?...)
>

Not that I know of. One of the critical notions in capabilities is that the
capability you wield names the object you manipulate. If the receive port
can be transferred, this intuition is violated. In consequence, few
capability-based systems have implemented receive ports.

No member of the KeyKOS family implemented such a notion. Coyotos comes
closest. "Entry" capabilities actually point to Endpoint objects, which in
turn contain a Process capability to the implementing process. A scheduler
activation is performed within this process. This is comparable to a
receive port capability because the process capability within the Endpoint
object can be updated.
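To make the indirection concrete, here is a rough C sketch. The type and function names are illustrative only, not the actual Coyotos definitions: an Entry capability names an Endpoint, the Endpoint holds the Process capability, and rebinding the Endpoint retargets every outstanding Entry at once.

```c
#include <assert.h>

/* Illustrative types -- not actual Coyotos definitions. */
typedef struct Process { int id; } Process;

/* An Endpoint holds a Process capability that can be rebound. */
typedef struct Endpoint {
    Process *process;   /* capability to the implementing process */
} Endpoint;

/* An Entry capability names the Endpoint, not the process directly. */
typedef struct Entry {
    Endpoint *endpoint;
} Entry;

/* Invoking an Entry resolves through the Endpoint to whatever
   process is currently bound there. */
static Process *resolve(Entry *e) {
    return e->endpoint->process;
}

/* Updating the Endpoint retargets every outstanding Entry capability,
   which is what makes it comparable to a movable receive port. */
static void rebind(Endpoint *ep, Process *p) {
    ep->process = p;
}
```

The point of the sketch is the single level of indirection: the invoker's capability never changes hands, yet the receiving side can be swapped out underneath it.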

I went back and forth for a long time about how multiple processes might
wait on a common receive port. The problem with this is that the objects
implemented by these processes tend to have state, so if successive
invocations can go to any of the participant processes, you end up in a
multi-threaded, shared-memory regime anyway. When an invocation crosses a
CPU-to-CPU boundary, the cache coherency costs can be higher than
the total invocation cost. We decided it would be better to use scheduler
activations and an event-driven approach, and let the receiving process
make its own decisions about threading.

This also has the advantage that all of the "pointers" (the object
references) point from the invoker to the invokee. That turns out to be
essential if you want to implement transparent orthogonal persistence. It
rules out receive port capabilities.


> > I also believe that there may have been some misunderstanding about L4,
> > because it was *never* going to be possible to adopt "the L4 kernel". At
> > best, it would be possible to adopt the L4 kernel and then structure some
> > core resource management services around it that were designed to support
> > the requirements of the Hurd. It was never clear to me if that was how the
> > L4 option was approached.
>
> I do believe that was the plan? The issue was that the cost of the
> user-space services that would be needed to properly run a Hurd-like
> architecture on top of the original L4 (without kernel capability
> support) turned out to be prohibitive...
>

I can believe that. The introduction of kernel capability support was an
outcome of the "L4 Capability Summit" meeting in Dresden in 2004 (if I
recall correctly), but it took several years for the results to show up in
actual implementations. It wasn't a small architectural change.


> In the end, Neal's experimental "Viengoos" kernel used an approach where
> the receiver (not the kernel) provides a receive buffer: but the receive
> operation can nevertheless happen asynchronously from the receiver's
> application threads. (Using some sort of activation-based mechanism --
> though I'm not sure about the details.)
>

I have not looked at Viengoos, but this sounds functionally similar to what
Coyotos does. In Coyotos, the receiving process designates a scheduler
activation block that says where the incoming data should go.
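Functionally, that works out to something like the following sketch. The structures here are purely illustrative, not actual Coyotos (or Viengoos) definitions: the receiver designates a block up front, and delivery writes into it and flags it, rather than blocking a receiving thread in the kernel.

```c
#include <assert.h>
#include <string.h>

/* Illustrative only -- not actual Coyotos/Viengoos structures. */
typedef struct ActivationBlock {
    char   buffer[256]; /* receiver-designated landing area for the payload */
    size_t len;         /* bytes delivered */
    int    pending;     /* set on delivery; cleared when the receiver runs */
} ActivationBlock;

/* Simulated kernel-side delivery: copy the payload into the block the
   receiver registered, mark it pending, and (in a real system) upcall
   the process at its activation entry point. */
static void deliver(ActivationBlock *ab, const void *msg, size_t len) {
    if (len > sizeof ab->buffer)
        len = sizeof ab->buffer;   /* truncate rather than overrun */
    memcpy(ab->buffer, msg, len);
    ab->len = len;
    ab->pending = 1;
}
```

The receive can thus complete asynchronously with respect to the receiver's application threads; the process discovers the pending message when it is next activated.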


Jonathan