In all of the following, keep one thing in mind: there
are no "wrong questions" or "right answers", only "different
perspectives". Looking at things through other people's
perspectives is sometimes illuminating - let me share my
perspective with you.
[EMAIL PROTECTED] (Randolph J. Herber, CD/DCD/SPG, x2966) writes:
+> Another thing I do not understand is why AFS did not implement full Unix file
+> semantics and instead implemented ``ACL''s.
This is really an "elementary" part of AFS - they *had* to do this
to accomplish their goals of working in large environments.
Unix file permissions were clearly designed for the use of small to medium
groups that do not need very sophisticated file protection schemes; even
as extended by Berkeley ("setgroups"), the scheme requires the agency of some central
authority to manage a file, "/etc/group", to control what groups people belong to.
Unix permissions are clearly not well suited for the use of
a very large scattered organization - you may have many workgroups
who will want very fine grained control over who can access
a given file - say, "everybody in the workstation systems group
may read stuff in this directory, except for Joe who is an asshole."
In a large organization, you may end up with hundreds, or even
thousands, of such "needs" for file protection - this information
clearly ought to be associated with the file or directory, and
controlled by the user or group who "owns" that storage.
Storing it anywhere else, such as /etc/group is ugly at best,
and unmanageable at worst.
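The idea above - per-directory access lists with negative entries, owned
and edited by the people who own the storage - can be sketched in a few
lines. This is a toy model, not actual AFS code; the names and the
rights string are made up for illustration:

```python
# Hypothetical sketch of AFS-style per-directory ACLs: each directory
# carries its own list of (principal, rights, negative) entries, so
# nobody has to pester a central authority to edit /etc/group.

def can_read(acl, user, groups):
    """acl: list of (principal, rights, negative) tuples."""
    allowed = False
    for principal, rights, negative in acl:
        if principal == user or principal in groups:
            if "r" in rights:
                if negative:
                    return False   # a matching negative entry always wins
                allowed = True
    return allowed

# "everybody in the workstation systems group may read stuff in this
# directory, except for Joe":
acl = [("wsg", "rl", False), ("joe", "rl", True)]
print(can_read(acl, "mary", {"wsg"}))   # True
print(can_read(acl, "joe", {"wsg"}))    # False
```

The point is that the whole policy lives with the directory, so a
workgroup can have thousands of such arrangements without touching any
system-wide file.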
In our organization, the University of Michigan, many people are used
to, and make full use of, the file permissions model of MTS. MTS,
somewhat like AFS, makes it possible to permit files to groups or
individual users, although MTS groups are not as flexible as AFS.
Their question with IFS will not be "why do you have ACL's" but rather
"why do ACL's apply to directories not files?" Of course, as you
might expect, that's yet another DFS feature. If we made
plain Unix file permissions available out there, there'd be
such a hue & cry...!
[EMAIL PROTECTED] (Randolph J. Herber, CD/DCD/SPG, x2966) goes on to write:
+> When I can not run set-uid and set-gid programs in the form they
+> were developed, I feel something is broke.
+> When I can not run programs without making them also readable,
+> I feel something is broke.
In a large networked world, you need to assume that people can break
in and compromise machines, or masquerade as otherwise non-existent
new clients. Remember, you are usually talking about many hundreds
of machines scattered over a wide area - it is not feasible
to post armed guards over every machine and connect them with pressurized
network connections patrolled by guard dogs. Yet, with NFS,
that is precisely what you would need to do, to run a secure
system over the area of, say, Ann Arbor. Anyone who can type L1-A can
"read" a program from NFS. Anyone who can alter their UID or GID,
or write an RPC interface to NFS, can get around NFS file permissions.
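To see why "alter their UID" defeats NFS, consider that an AUTH_UNIX-style
credential is just numbers the client puts in the request. The following
is purely illustrative server logic, not real NFS code:

```python
# Illustrative only: an NFS-style server believes whatever uid the
# client claims, so file permissions protect nothing against a client
# machine you don't control.

def nfs_read(server_files, claimed_uid, path):
    owner, mode, data = server_files[path]
    # The server cannot verify claimed_uid -- it is just a number in
    # the RPC request, chosen by the (possibly hostile) client.
    if claimed_uid == owner or mode & 0o004:
        return data
    raise PermissionError(path)

files = {"/secret": (100, 0o600, b"payroll")}
# An attacker who can alter their UID (or hand-roll the RPC) simply
# claims to be uid 100:
print(nfs_read(files, 100, "/secret"))   # b'payroll'
```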
With AFS, file permission is built on top of kerberos style
protection technology. You have access to something if you
know the key. There's no way the file server can know if
you are really the kernel trying to load a program, or a user
trying to "cp" the file - so why pretend "execute only" permission
exists when the concept is in fact meaningless in a networked
world? Basically, the concept of being "SUID" just does not
make sense in a distributed environment - you need to remodel
such things into a "server/client" type paradigm, where the
server runs in a trusted secure environment and has keys to do its thing,
and clients run in a non-trusted unsecure environment and could be
anything - why not define an open protocol & invite people to write
their own improved clients?
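The "you have access if you know the key" idea can be sketched with an
HMAC standing in for a Kerberos ticket. Nothing here is the real AFS
protocol; it just shows why the server can't tell an "execute" fetch
from a "cp" - either way it hands the same bytes to anyone holding a
valid key:

```python
import hmac, hashlib

# Assumed shared secret between the server and a legitimate client;
# the name and scheme are invented for illustration.
SERVER_KEY = b"shared-secret"

def make_ticket(user):
    return hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).digest()

def fetch(user, ticket, data):
    if not hmac.compare_digest(ticket, make_ticket(user)):
        raise PermissionError("bad ticket")
    return data   # same bytes whether the caller will exec() or cp them

prog = b"\x7fELF..."
ticket = make_ticket("mary")
assert fetch("mary", ticket, prog) == prog
```

There is no hook in this exchange where "execute only" could be
enforced, which is exactly the point made above.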
+> I feel that the fact that ``quite a bit of user retraining''
+> is required shows that something is broke.
Um - well, the computer industry was the wrong thing to go into,
if you want a stable non-changing environment. The computer
world was very different 10 years ago (Remember Codata?
Wicat? DEC Vax? VMS? Multics?) I expect it will be very
different 10 years from now. I, for one, welcome such change.
For me, the neat thing about Unix is how successful it has
been at evolving with the technology and taking advantage of it.
"MVS" is a good example of what happens when you work
hard to avoid the need for "user retraining". I wonder just how
many card decks got turned into MVS files and are still used today?
None of this means that AFS does it best, or that you shouldn't
scream bloody murder when AFS doesn't do something the same
as Unix. It just means you should first ask "Why isn't this
done the same way" and "How could it be done to better
preserve the Unix semantics"? (Perhaps followed by "did
DFS do it that better way" and "when will DFS be out? :-))
-Marcus Watts
UM ITD RS Umich Systems Group