Sorry for the delayed response to this thread - I wanted to close out the
instance resolution discussion first rather than overload my brain.

On Tue, Jul 23, 2013 at 2:14 PM, David Jeske <[email protected]> wrote:

> BK> Jonathan, you mentioned the cost of unsafe as a tax on the user, but
> BK> there are many apps where the introduction of a 3rd-party lib/dll is a
> BK> significant problem, possibly as significant as memory / type safety.
> BK> Look at unsafe-language browsers that work well but grow unstable with
> BK> add-ins.
>
> JS> Sure, but the main source of that instability is the lack of safety
> JS> and isolation of the add-in.
>
> @Shap - I've been silently with you on this entire fork of the
> discussion regarding the importance of modular compilation, late binding
> (dll-ish), and matters of implementation selection in the large.
>
> I want to value-add that the above statement reaches far beyond
> type safety and into the isolation and containment provided by operating
> system kernels.
>
Yes. Though one of the research issues we were trying to explore in BitC -
and one where we stumbled badly - was how to type isolation and containment.

The E language, as one example, has this interesting idea that all globals
must be deep-constant. For various reasons I didn't want to go that far in
BitC, but it did seem plausible that we could infer "deep constantness" and
then identify from the module type whether a module was isolation-safe in
the E sense. We failed at that, but only because I got my notion of
"constness" wrong. I still need to figure out the *right* notion, but I
think it can be done.
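
For concreteness, here is a minimal sketch of the distinction in Rust
rather than BitC (the Rust framing and every name below are mine, purely
illustrative): a deeply constant global cannot be used as a covert channel
between two otherwise isolated clients of a module, while a global with
interior mutability can.

    use std::sync::atomic::{AtomicU64, Ordering};

    // Deep-constant: the value and everything reachable from it is
    // immutable, so two isolated callers of `lookup` cannot observe
    // each other through TABLE.
    static TABLE: [u32; 4] = [2, 3, 5, 7];

    pub fn lookup(i: usize) -> u32 {
        TABLE[i % TABLE.len()]
    }

    // NOT deep-constant: the binding is immutable, but the payload is
    // mutable. Any two callers of `bump` can signal to each other
    // through COUNTER, so a module containing it is not isolation-safe
    // in the E sense.
    static COUNTER: AtomicU64 = AtomicU64::new(0);

    pub fn bump() -> u64 {
        COUNTER.fetch_add(1, Ordering::SeqCst)
    }

    fn main() {
        println!("{}", lookup(2)); // always 5; no shared state observable
        println!("{}", bump());    // 0, then 1, ... observable across callers
        println!("{}", bump());
    }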

The other case that I'd really like to be able to type check is confined
procedures - or at least the simple case of them. If I invoke a factory
procedure, I'd like to know that nothing escapes from that invocation
*except* into the yield of the factory, and that the yield of the factory
initially speaks only to me. My sense is that this, again, can be handled
by suitable type checking.
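
Again purely as a sketch (Rust rather than BitC; the names here are
hypothetical), the shape of the property I'd want the checker to certify
looks roughly like this - ownership suggests the shape, but by itself it
does not prove confinement for arbitrary factory bodies:

    // The yield of the factory: owns its state and holds no references
    // to anything the factory might have stashed elsewhere.
    pub struct Yield {
        state: Vec<u8>,
    }

    impl Yield {
        pub fn poke(&mut self, b: u8) {
            self.state.push(b);
        }
    }

    // What we'd want certified: `factory` deposits nothing derived from
    // `seed` anywhere except the returned Yield, and the Yield starts
    // out connected only to its caller.
    pub fn factory(seed: Vec<u8>) -> Yield {
        Yield { state: seed }
    }

    fn main() {
        let mut y = factory(vec![1, 2, 3]);
        y.poke(4); // initially, only the caller holds a handle to y
        println!("{:?}", y.state);
    }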

> I say this because, despite many promises, we've yet to see a fully
> software-based isolation model that's worked out well in practice. I agree
> that type safety (really memory safety) can close a huge surface area for
> bugs and security attack vectors into processes.
>
Yes. And that's a sufficient reason to migrate, but I think there is a
stronger reason hiding beneath this. If we cannot ensure the safety of
application memory references, there is no real possibility of
software-based isolation.

You might respond - and in some cases I might agree - that address spaces
are a better way to go. The problem with that is that address space
separation incurs data copy costs, and in some cases those copy costs are
pretty big. That's the second reason (the first being laziness) that we
build monolithic applications.

The thing that's hard to isolate without address space boundaries is CPU
time.


> However, IMO the lack of sufficient isolation boundaries and abstractions
> within operating system kernels is an equal or bigger issue than the lack
> within the processes and runtimes themselves.
>
Hard to say which is the bigger issue, but nestable isolation that doesn't
require special privilege to set up is clearly necessary. The problem is
that it isn't sufficient. There are subsystem boundaries across which data
has to move, and there are various forms of attack possible at those
boundaries. Typing can get us past a lot of those.

> IMO the big missing link at the operating system level is to stop requiring
> "administrator configuration" to provide sub-component isolation... (more
> like Plan9/VSTa)
>

Or KeyKOS/EROS/Coyotos, which inspired the VSTa ideas (and Andrew, by the
way, lives not that far away these days).


> Independent of the set of mechanisms used (capabilities, domain-hierarchy,
> virtualization), any application should be able to create an efficient
> containment sandbox --- which is *indistinguishable* from a
> non-sandboxed environment to the contained component. This would stop holes
> in the runtimes and end-software from resulting in gaping holes in system
> security.
>

Indistinguishable is very hard to achieve in practice. You can get 99
44/100% of the way there pretty easily. That last 0.56% has a very high
perf cost that probably isn't worth it. I prefer to take the view that all
applications should run sandboxed all the time, at which point the ability
to detect the difference becomes moot.


> Android is arguably the "most isolated" popular mainstream operating
> system, and they are providing their isolation not through purpose-built
> mechanisms, but by re-purposing the UNIX user-id protections to provide
> cross-app protection domains. This means apps are segregated from each
> other, but they are not free to create their own containment domains.
>
Android is an interesting study in how far you can re-purpose the UID
mechanism. The answer is "a lot further than we thought", but also "not far
enough". They also made a fatal mistake in admitting an unsafe native code
SDK.


> I should become more educated about Capsicum and the new-ish MacOS/iOS
> application isolation. As capability systems, I don't think they are
> capable of arbitrary end-application sub-containment -- though if I'm wrong
> about this I would love to be corrected.
>
Capability systems, so far as I know, are the *only* systems that are
capable of arbitrary end-application sub-containment.


Jonathan
