On 27/10/05, Jonathan S. Shapiro <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-10-27 at 00:35 +0100, Brian Brunswick wrote:
> > called "endogenous" verification, in EROS and others, and potentially
>
> Close, but not quite. It is possible to compute whether a newly
> instantiated subsystem will be confined at the time of instantiation,
> and it is possible for the requester (the program requesting that the
> subsystem be instantiated) to learn whether this will be so.
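To check my own understanding, here is a toy model of that instantiation-time check. This is my guess at the *shape* of the test, not EROS's actual algorithm or data structures (the class and function names are all made up): a subsystem is confined iff every capability transitively reachable from its initial set is either to its own fresh private storage or on an explicitly authorized list of "holes".

```python
# Toy model of a transitive confinement check (my reading of the
# discussion, NOT real EROS code). A subsystem is confined iff every
# capability reachable from its initial set is private storage or an
# explicitly authorized "hole".

class Capability:
    def __init__(self, name, private=False, reachable=()):
        self.name = name
        self.private = private            # e.g. freshly bought space-bank storage
        self.reachable = list(reachable)  # capabilities obtainable through this one

def is_confined(initial_caps, authorized):
    """Static check performed once, at instantiation time: walk the graph."""
    seen, stack = set(), list(initial_caps)
    while stack:
        cap = stack.pop()
        if id(cap) in seen:
            continue
        seen.add(id(cap))
        if not (cap.private or cap.name in authorized):
            return False                  # authority leaks outside the box
        stack.extend(cap.reachable)
    return True

# Private storage plus an approved log capability is confined;
# adding a raw network capability is not.
storage = Capability("bank", private=True)
log = Capability("log")
net = Capability("net")
print(is_confined([storage, log], authorized={"log"}))       # confined
print(is_confined([storage, log, net], authorized={"log"}))  # not confined
```

The transitivity Shapiro mentions would fall out of the graph walk: if the initial capabilities pass, everything reachable through them has already been inspected.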
So is this confinement in some sense relative to assumptions about additional access? The program could then give the subsystem additional capabilities and change that? But nothing else can, because it hasn't the capabilities to the new subsystem? (/Can/ capabilities be sent over IPC channels in EROS, or only at creation time?)

> The space bank authentication is used EVERY TIME we create a process. It
> is utterly impossible to build a program that makes any guarantees about
> behavior at all if it does not know that it has exclusive access to its
> own storage. In the discussion, I described the protection of encryption

This is protection even against the thing requesting creation of the subsystem - only the instantiator has the access, thereby protecting the new thing if it has special capabilities?

BTW, this kind of thing seems awfully like privilege escalation to me - the much-derided setuid applications! But perhaps not.... We might think of an instantiator with special privileges as nothing more than a server that spawns a new thread to serve each request.... The extra threads allow additional reasoning about isolation of the requests :-)

>
> The confinement check is not used as much. The shell does this check all
> the time, but most other programs do not bother except in special
> circumstances, such as execution of untrusted code. This is true partly
> because the test is transitive, so the check performed by the shell is
> sufficient to cover an entire program.

Presumably it can't apply to any network-accessing program at all. And confinement is relative - we can't know what kinds of secrets are actually present in the (small) set of files being handed to a program.... But then if there is no outward channel.... I guess it might come down to covert communication channels then. Real-time issues!

> Once confinement becomes a design consideration, programmers begin to ask
> how to design their programs in ways that permit them to be confined.
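On the space-bank point, my mental model of the authentication step is something like the following. This is pure guesswork about the shape of the protocol (all names are hypothetical, and real EROS presumably authenticates through the bank capability itself rather than a parent pointer a forger could fake): before building a process on storage from some bank, the creator verifies that the bank genuinely descends from the trusted prime bank, so nobody else can hold a capability to that storage.

```python
# Toy model of space-bank authentication before process creation
# (an assumption about the idea, NOT the real EROS interface).

class SpaceBank:
    def __init__(self, parent=None):
        self.parent = parent
    def buy_page(self):
        return object()   # stand-in for a freshly allocated page

PRIME_BANK = SpaceBank()  # the one trusted root of all storage

def authenticate_bank(bank):
    """Accept a bank only if it descends from the prime bank."""
    while bank is not None:
        if bank is PRIME_BANK:
            return True
        bank = bank.parent
    return False

def create_process(bank):
    # Done on EVERY process creation: without exclusive storage,
    # no behavioral guarantee about the new process is possible.
    if not authenticate_bank(bank):
        raise ValueError("untrusted space bank: storage may be aliased")
    return bank.buy_page()

sub_bank = SpaceBank(parent=PRIME_BANK)
create_process(sub_bank)              # accepted: descends from the prime bank
```

The interesting property, if I have it right, is that the check protects the *new* subsystem's storage even from its own requester.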
This is minimum privileges, I suppose. Lots of good effects, of course.

> What would it be worth to you to be able to run an email agent that ran
> code provided by some arbitrary, hostile third party, and **never have
> to worry** about what it did to your system?

My life's ambition :-) Well, one of them anyway. See elsewhere.

> > We might think of it as akin to so called "introspection" that is
> > possible in the run-time systems of certain languages (java), and used
> > to support dynamic loading and configuration of system components.
>
> Perhaps, except that the constructor check is performed statically, and
> introspection is generally very problematic if security is a design
> goal.

Eh? I'm confused now. I thought it was dynamic. Or is it a verification of the code of the constructor? I guess I'm trying to become convinced of the utility of such a check as a dynamic thing, rather than just as some (automated) reasoning about code correctness (like a statically typed language). Duh... Ogg say automated reasoning /good/!

What about verification of other sorts of things - space and time constraints, result invariants? Perhaps these are more naturally checked dynamically, though.

--
[EMAIL PROTECTED]
_______________________________________________
L4-hurd mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/l4-hurd
