> 3. SideCar / S/Ident technology -- from, respectively, Cornell's Project
> Mandarin and Bob Morgan at Stanford. Both of these approaches deal only
> with authentication; both work with standard browsers. Here's how they
> work:
>
> the user requests a URL; the server detects that the requested document
> is access-controlled; the server opens a TCP connection to the (sidecar,
> s/ident) process on the requesting client; the client returns
> "authentication info" (e.g. a K4 ticket, K5 ticket, PK material, etc.);
> and the server does standard authentication processing on the info
> returned by the client.
>
> If authentication succeeds, the server then does "site specific
> authorization processing" (hopefully, leveraging what you already have).
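
[For concreteness, the callback flow described above can be sketched roughly
as follows. The callback port, message format, and function names here are
illustrative assumptions, not taken from either real implementation --
S/Ident is said to build on the ident protocol, hence the port 113 guess.]

```python
# Rough sketch of the server-side callback flow described above.
# Port number, request line, and function names are assumptions for
# illustration, not taken from SideCar or S/Ident.
import socket

CALLBACK_PORT = 113  # assumed: S/Ident extends the ident protocol

def fetch_auth_info(client_addr, port=CALLBACK_PORT, timeout=5.0):
    """Connect back to the (sidecar, s/ident) process on the
    requesting client and read whatever auth blob it returns."""
    with socket.create_connection((client_addr, port), timeout=timeout) as s:
        s.sendall(b"AUTH-REQUEST\r\n")   # made-up request line
        return s.recv(4096)

def handle_request(path, client_addr, is_protected, verify_ticket,
                   fetch=fetch_auth_info):
    """Serve one request: public documents go straight through; for
    protected ones, do the callback, verify the returned credentials
    (K4/K5/PK), then apply site-specific authorization."""
    if not is_protected(path):
        return "200 OK (public)"
    blob = fetch(client_addr)
    principal = verify_ticket(blob)      # standard auth processing
    if principal is None:
        return "401 Unauthorized"
    # site-specific authorization would go here, keyed on `principal`
    return "200 OK (authenticated as %s)" % principal
```

(The `is_protected` and `verify_ticket` hooks stand in for the server's
access-control check and its standard K4/K5/PK processing, respectively.)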
>
> There are pros and cons to this approach: it works with standard browsers,
> it leverages existing infrastructure, and it would work with decentralized
> web servers run by departments, etc.; however, it requires new code in the
> server to implement this functionality, new code in all the clients, and
> it may be susceptible to some forms of attack.
>
> 4. Gradient's Web Crusader -- works with standard browsers. It puts a proxy
> server on each desktop. For normal URLs, the proxy does the normal,
> expected thing. When the proxy recognizes a URL pointing into the DFS
> filespace, it uses DCE RPC to connect to a special web server. The RPC
> includes DCE authentication info about the requestor; the server
> authenticates, and supports DCE access control mechanisms on the web space.
> This approach requires a functioning DCE cell, and the DCE runtime on
> every desktop.
>
> This approach looks very interesting, but is completely irrelevant as a
> short term solution. (This month, we're deploying DCE -- is everybody
> ready?).
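[The per-desktop proxy's job in that design amounts to a dispatch on the
URL; a minimal sketch, where the "/.../" prefix test and both fetch
handlers are assumptions for illustration, not Gradient's actual rules:]

```python
# Minimal sketch of the Web Crusader-style per-desktop proxy dispatch:
# normal URLs are fetched normally; URLs into the DFS filespace go over
# an authenticated DCE RPC to a special web server. The "/.../" prefix
# test and both handlers are illustrative assumptions.
from urllib.parse import urlsplit

def points_into_dfs(url):
    # Assume DFS web space is published under the DCE global-namespace
    # prefix /.../<cellname>/...; the real recognition rule may differ.
    return urlsplit(url).path.startswith("/.../")

def route(url, plain_fetch, dce_rpc_fetch):
    """Dispatch one request the way the proxy is described as doing;
    the two fetch functions are injected for illustration."""
    if points_into_dfs(url):
        # The DCE RPC carries the requestor's credentials, so the
        # server can authenticate and apply DCE ACLs to the web space.
        return dce_rpc_fetch(url)
    return plain_fetch(url)
```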
We've been working on a solution (Shelob) that's similar to what you
describe Web Crusader as doing, though we haven't seen it ourselves. The basic
idea is to put a proxy on each client machine, which talks to the server
over an encrypted, Kerberos-authenticated connection. The web browser
doesn't even need to be configured to use a proxy; instead, we give users
a URL like http://localhost:shelob/real-server/XXX. Thus, the proxy is
used only for connections requiring authentication. The proxy speaks
a custom protocol to the server over an encrypted connection. In the
current implementation, the server is a separate program that talks to
a slightly-modified version of Apache running on the same machine. In this
configuration, the same server can serve both authenticated and
unauthenticated requests, and you get all the features Apache normally
has. Files can be protected based on the Kerberos principal attempting
access, using an appropriate .htaccess file; for an example, take a
look at /afs/cs.cmu.edu/user/jhutz/www/test/.htaccess. In addition,
CGI scripts get passed an environment variable containing the Kerberos
principal, if any, that the remote user is authenticated as.
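[With URLs of that form, the proxy can recover the real server from the
first path component. A rough sketch of that splitting step -- the rule
is inferred from the URL form shown above, and the port number in the
example stands in for whatever the "shelob" service resolves to:]

```python
# Rough sketch of how a Shelob-style proxy could pull the real server
# out of a URL like http://localhost:<port>/real-server/XXX. The
# splitting rule is an assumption based on the URL form shown above.
from urllib.parse import urlsplit

def split_proxy_url(url):
    """Return (real_server, path_on_server) for a proxied URL."""
    pieces = urlsplit(url).path.lstrip("/").split("/", 1)
    real_server = pieces[0]
    path = "/" + (pieces[1] if len(pieces) > 1 else "")
    return real_server, path
```

So a request for http://localhost:4321/www.example.org/docs/index.html
(port chosen arbitrarily here) would be forwarded, over the encrypted,
Kerberos-authenticated channel, to www.example.org for /docs/index.html.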
The system is still under development, but all the basic features are
there, and we expect to deploy it relatively soon. The remaining work
consists mainly of some enhancements to the proxy server to better
protect against network spoofing attacks, plus ports to MacOS, Windows,
and a few more UNIX systems. We currently have versions of the proxy for
SunOS 4.1.x, SunOS 5.x, Ultrix 4.3, and NetBSD 1.0 or 1.1. We intend
to also do ports for Irix 5.3, AIX 3.x, DEC OSF/1 3.2c, and HP-UX 9.0x,
plus new versions of all of those systems as they come along. I would
also not be surprised to see a Linux port, though that is not currently
officially part of the project.
-- Jeffrey T. Hutzelman (N3NHS) <[EMAIL PROTECTED]>
Systems Programmer, CMU SCS Research Facility
Please send requests and problem reports to [EMAIL PROTECTED]