1998-12-28-13:42:09 Paul D. Robertson:
> This doesn't address a plethora of potential CGI problems though.  Also, 
> there's the question of CGI auditing.  In a TCB-based environment, you 
> simply need to give the developer access to certain MAC levels and 
> let them do their job.  It doesn't matter if they slip up, if they're 
> intentionally malicious, if their compiler was trojaned, or if someone 
> breaks their machine, you're assured that the only thing the CGI can access is
> whatever files it needs to do its job.  No worries about exec() calls, shells,
> newly installed software, setXid() processes/files, chroot games, buffer 
> overflows in libraries, hosts.equiv, .rhosts, kernel modules, /dev entries...

Well --- sure. Those don't strike me as hard to solve well enough without a
trusted OS.
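For what it's worth, here is a minimal sketch of the kind of plain-Unix containment I mean --- the function name and the particular limits are my own invention, not anything from this thread. It runs a CGI with a scrubbed environment and kernel resource limits; a real deployment would also chroot and switch to a dedicated unprivileged uid, both of which need root:

```python
import resource
import subprocess

def run_cgi_sandboxed(argv, workdir, timeout=10):
    """Run a CGI binary with a scrubbed environment and resource limits.

    A sketch only: a production wrapper would additionally chroot into
    workdir and setuid to a dedicated account before exec'ing the CGI.
    """
    def limit():
        # Cap CPU seconds so a looping CGI gets killed by the kernel.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        # Cap address space so a runaway allocation fails instead of
        # thrashing the machine (256 MB soft and hard).
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
        # No core dumps, which could otherwise leak request data to disk.
        resource.setrlimit(resource.RLIMIT_CORE, (0, 0))

    # Drop the inherited environment entirely; pass only a safe PATH.
    env = {"PATH": "/usr/bin:/bin"}
    return subprocess.run(argv, cwd=workdir, env=env,
                          preexec_fn=limit, timeout=timeout,
                          capture_output=True, text=True)
```

None of this gives you MAC-style assurance, of course --- it only bounds what a misbehaving CGI can consume and inherit, which is the "well enough" part of the claim above.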

The hard problem is assuring that the CGI doesn't damage the most valuable
data on the same machine, which is to say the data it must be able to
manipulate to do its job. That requires auditing; there's no way around it.

What I hear you saying is that trusted OSes are good for sandboxing. Sure. So
is separate hardware, and for security-critical CGIs it's an easy fix --- and
one I'm more likely to trust than an OS used by a few people here and there,
none of whom have access to its source.

As for the trustworthiness of the evaluation process: for whatever it's worth,
military machines attached to the internet are routinely burgled, and the
standard "oh, this is no problem" response from the press flacks is that
_important_ machines cannot be attached to the internet. I am inclined to read
this as meaning that the evaluation process is expected to produce OSes that
cannot be configured to withstand the grade of attack that will be mounted
from the internet.

In other words, the boys setting gov't security practices seem to count on
separate hardware for their sandboxing.

-Bennett