>
> A. Haxby writes:
> 1./ We would like to improve our security by using secure-id cards for
> one-time passwords. Has anyone managed to use secure-id cards with the
> AFS version of kerberos? Is anyone using secure-id cards at all?
I looked *very* briefly into this. I gave up when I found that
the secure-id stuff is sort of "proprietary", and it looked like
it would be difficult to get useful information on it. Here's
the catch: there needs to be a machine that can (1) predict
the secure-ID keys, and (2) somehow has access to AFS keys to
make tickets. In greater detail: it seems that as generally
implemented, the secure-ID system uses software that runs (at
least at the UofM) on an IBM mainframe that knows the secure-ID
algorithm and must also use some sort of table that contains per-card
seed data. I was discouraged enough at this point to give up efforts
to find out more details. If there were a workstation implementation
of the server side, this might be feasible, which brings us
to part 2. If the only goal is to have it authenticate to AFS, then
all that's needed is a service ticket for AFS. If you know the key of
afs.@CELL-NAME, you can easily make (or "forge") service tickets - and
any machine that is either a fileserver or a database server in fact
has the key for AFS readily accessible - that's what's stored in
/usr/afs/etc/KeyFile.
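To make the "forging" idea concrete, here is a sketch of what a
daemon holding the cell key might do. This is my own illustration,
not AFS code: a real service ticket is a krb4-format structure
encrypted with single-DES under the afs key, and every name here
(make_service_ticket, mock_encrypt) is hypothetical. The XOR
"encryption" is a stand-in so the flow is visible.

```python
import hashlib
import json
import time

def mock_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in for DES: XOR against a SHA-256-derived keystream.
    # Symmetric, so the same call also decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def make_service_ticket(afs_key: bytes, user: str, client_ip: str,
                        lifetime_s: int = 8 * 3600) -> bytes:
    # The ticket names the user, carries a session key, and is sealed
    # with the afs service key (the one kept in /usr/afs/etc/KeyFile),
    # so a fileserver holding the same key can open and trust it.
    # (A real implementation would pick a *random* session key; this
    # deterministic one is just for the sketch.)
    ticket = {
        "principal": user,
        "client_ip": client_ip,   # checked against the caller's address
        "session_key": hashlib.sha256(afs_key + user.encode()).hexdigest()[:16],
        "start": int(time.time()),
        "lifetime": lifetime_s,
    }
    return mock_encrypt(afs_key, json.dumps(ticket).encode())
```

The point is that a daemon with read access to KeyFile could mint
such a ticket once the secure-ID check passes - no kaserver is
involved if AFS-only access is all you want.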
If you want to get a *general* kerberos tgt, one that can be used to
request multiple service tickets, that's harder. Basically, you need
to (somehow) have access to the kerberos database itself. There are
several ways you can go - you can attempt to get the kerberos tgt key
and forge tgt's. Or, you could fetch the user's key from the kerberos
database and use that to fetch a tgt. Both approaches are, of course,
fraught with peril. To fetch the user's key, one thing that might help
is that there is, in fact, an RPC that you can use to fetch keys. It
has to be from a connection on the local machine, and it has to be
authenticated with an admin key. *And* - kaserver must be compiled
with "GETKEY" defined - which is not the way Transarc distributes it by
default. To fetch the tgs key, you'd need to learn enough about
the kerberos database to find it, and the logic would have to
include support that recognizes that kaserver changes it fairly often
(something like once a day?). On the bright side, since it's
only one particular key, there are many details, such as the key
hashing algorithm used to link ka database records, that you won't
need to learn. One final detail that applies to both approaches:
MIT kerberos generally wants the IP address in the ticket to match
that of the client being used. So, that means the IP address in the
tgt must match that of the client the ticket is to be used on - NOT
that of the database server. That's not so bad if you're forging
the ticket, but if you're using the getkey approach, you'd want
to secure a connection between kaserver & the client based on
some secret ultimately authenticated and encrypted with Secure-ID,
and use that connection to hand the user's kerberos key back to
the client workstation, which would then use it to obtain a tgt.
If you're willing to accept some limitations, then you can
simplify this process. For instance, if you ONLY want users
to use secure-ID to authenticate, then set the user's kerberos
key to be something the daemon can easily calculate or lookup,
but the user can't, and the daemon or the authentication
software can then obtain the tgt & proceed from there.
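The "easily calculate or lookup" trick can be sketched like so.
The per-daemon master secret is my own illustration - nothing in
kaserver works this way out of the box - but it shows the property
you want: the daemon can recompute any user's kerberos key on
demand, while the user, not holding the secret, cannot, so every
login is forced through the secure-ID check.

```python
import hashlib
import hmac

# Hypothetical: a secret known ONLY to the authentication daemon.
MASTER_SECRET = b"daemon-only secret"

def derived_user_key(username: str) -> bytes:
    # The daemon sets each secure-ID user's kerberos key to this
    # value.  HMAC of the username under the master secret: trivial
    # for the daemon to recompute, infeasible for anyone else.
    # (Truncated to 8 bytes to mimic a single-DES key size.)
    return hmac.new(MASTER_SECRET, username.encode(),
                    hashlib.sha256).digest()[:8]
```

After the secure-ID exchange succeeds, the daemon recomputes the
key, fetches the tgt on the user's behalf, and hands it back.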
>
> 2./ We have a user group who develop and run seismic data processing
> applications. They currently use NFS on a number of platforms including
> SP-2 and suffer from problems with NFS cross mounts, inconsistent name
> space and different passwords on different machines (no NIS or kerberos).
> They are considering using AFS (either in our existing cell or perhaps a
> seperate cell) but have a total of 500GB of data. Individual data sets
> can be up to 3GB and the applications read them sequentually.
>
> Does anyone have experience with very large AFS volumes? What servers and
> disk configurations do you use? I wondered about using a couple of SPARC
> 1000's with 256GB of raid disk on each as file servers. To reduce network
> i/o and improve performance I wondered if it would be possible to allow
> users to run really large jobs on the fileservers themselves? Is this
> sensible? Does anyone allow users to run jobs directly on AFS
> fileservers?
It would not be useful to run jobs on the fileservers.
The way AFS works, there is no direct path between client
programs and the disk - even a job running on the fileserver
machine still goes through the cache manager and the fileserver's
RPC interface like any other client. Therefore, there is no
advantage.
There are however two important disadvantages: CPU used
by user jobs detracts from fileserver performance (translation:
your AFS fileserver will have really sucky performance)
and there is a vastly increased security risk if you
allow normal users access to the machines. That is
because all of the security of AFS depends on the
secrecy of the keys in /usr/afs/etc/KeyFile; and anyone
who can log into the fileservers has many more opportunities
to exploit holes and loopholes in the host OS to obtain
root and hence access to the KeyFile. I have heard
of one site that nevertheless allows normal users access
to their fileservers. I think they're nuts, but I gather
the local politics at that site would make doing things
"right" very difficult.
>
> 3./ This question is way off-topic for info-afs but subscribers to this
GNU software is good stuff. Since I come from a University,
though, I haven't any simple solution to offer you in terms of
dealing with corporate politics. I can however tell you
the two things that will interest a university most: money,
and the chance to investigate new technologies. That
doesn't necessarily translate directly into useful commercial
grade supported products, and even such "well-known" packages
as Kerberos itself, and BSD "Unix" are not of "commercial"
quality as distributed by the highly respected institutes
that originated these fine works of software. There are organizations
such as Cygnus and BSDI, Inc., that you should contact instead
if you want to use these products, and care about "commercial"
levels of functionality and support. For a core technology
such as kerberos, you should be somewhat paranoid. Use
Transarc, or failing that, Cygnus. For other things, well, contact
your local colleges and see if you can arrive at a mutually satisfactory
arrangement. Different universities and colleges may have vastly
different skill levels and interests, and one may be just what you're
looking for. Don't do it, however, if your only interest is cheap
labour. You will be disappointed. Do it because "it's a good thing"
to offer educational opportunities, and because you might even
be able to hire the best & brightest of the lot.
>
> Do you make use of 3rd party products such as HP OpC or do you write/buy
> you own code for systems monitoring and administration?
Interesting sub-question. Different parts of the university
use a number of different monitoring solutions. There have
been a number of serious efforts to evaluate outside products
for use here, but none have met our needs. Most of these
products tend to be big on flash & ease of use, and have
serious limitations when applied to heterogeneous environments
with weird local requirements, and tend to be very expensive
for the functionality offered. As a result, all of the parts
of the university that I can think of that have monitoring systems
in place use various homebrew systems. For IFS, we use
something called "bigbro" which was written by the College of
Engineering (CAEN). "bigbro" has absolutely *no* flash, but
is very flexible, and lends itself very well to having new
monitoring tests plugged into it. It is certainly not of
commercial grade, but it meets our functional requirements
better than any commercial software we've heard of, and we're
quite happy with it.
>
> If you are a University or small niche-market company, do large
> corporations make use of your skills? What can you provide?
Yup. Brain power. The University of Michigan has all sorts of
research partnerships, both small & large, with all sorts of
corporations, as well as the government.
-Marcus Watts
UM ITD PD&D Umich Systems Group