> From: Bill Cattey <[EMAIL PROTECTED]>
.....
> It is unfortunate that you do things this way.
> With MIT networking the dorms, and serving its users with AFS, one can
> expect that PC and Mac users at MIT will be interested in AFS access.
>
> Unfortunately, a process per PC would not scale in our environment.
NFS itself doesn't scale well -- so it would be unreasonable
to expect an NFS server to service thousands of machines
regardless of whether its files live in AFS or UFS. Even
if the protocol would support it, it's not clear to me
that the AFS cache will scale well either. If one had
only a small number of user-level NFS translator processes
shared among many clients, another problem crops up:
system calls are synchronous. So, while the process
is blocked in the kernel trying to get user A's files,
user B's NFS request (even for a file already in the cache)
will hang. A user who happened to sniff at files in /afs could
cause especially interesting results.
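To make the blocking problem concrete, here's a toy sketch
(Python, purely illustrative -- nothing like real translator
code): one process services requests strictly in order, so
user A's slow fetch delays user B's answer even though B's
file is already sitting in the cache.

```python
import time

# Files already fetched into the translator's local cache.
CACHE = {"/afs/hot/file": b"cached data"}

def fetch_from_afs(path):
    """Stand-in for a synchronous kernel call that blocks
    the whole process while a file server is contacted."""
    time.sleep(0.2)
    CACHE[path] = b"fetched data"
    return CACHE[path]

def translator_loop(requests):
    """One process, strictly sequential: every request waits
    for all the requests ahead of it."""
    t0 = time.monotonic()
    done = {}
    for user, path in requests:
        data = CACHE.get(path) or fetch_from_afs(path)
        done[user] = time.monotonic() - t0  # arrival-to-completion time
    return done

done = translator_loop([
    ("A", "/afs/cold/file"),  # cold: the synchronous fetch blocks the process
    ("B", "/afs/hot/file"),   # already cached, yet still waits behind A
])
print(round(done["A"], 1), round(done["B"], 1))
```

B's latency ends up as large as A's; with thousands of users
behind one process, the queue only gets worse.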
For Macintosh usage, AFP has the same basic problems as NFS.
(Except it's worse, because the Finder's pretty graphical
interface guarantees users will generate lots of filesystem
requests.) So you're pretty much stuck with either
(a) trying to put the AFS cache manager into these types
of machines [and it's only now that *new* PC's & Mac's are getting
big enough to make this a reasonable choice, ] or (b)
distributing intermediate servers, and thinking of them
as just more networking glunk.
It seems to me that NFS has a more basic security flaw;
in most (all?) implementations, authentication only happens at
login time. That means it should be possible for a bad guy,
especially one who can sniff on the network, to hijack
an existing connection. That's difficult to fix without
source to both the client & server sides, but one might
at least ask that the initial authentication exchange
not send passwords in the clear.
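A toy model of that flaw (hypothetical names, not real NFS
code): credentials are checked once when the session is set
up, and every later request is trusted on the (host, uid)
pair alone -- so anyone who saw those values on the wire can
forge requests and ride the session.

```python
# Toy model of login-time-only authentication (illustrative only).
class Server:
    def __init__(self):
        self.sessions = set()      # (host, uid) pairs admitted at "login"

    def login(self, host, uid, password):
        # The one and only credential check in the whole protocol.
        if password == "secret":
            self.sessions.add((host, uid))
            return True
        return False

    def read_file(self, host, uid, path):
        # Later requests are trusted on (host, uid) alone -- there's
        # no proof the sender is the user who actually logged in.
        if (host, uid) in self.sessions:
            return f"contents of {path}"
        raise PermissionError("not authenticated")

server = Server()
assert server.login("10.0.0.5", 1001, "secret")   # legitimate user logs in

# An eavesdropper who saw (host, uid) go by simply forges them:
stolen = server.read_file("10.0.0.5", 1001, "/afs/user/private")
print(stolen)
```

Fixing this properly means authenticating each request, which
needs changes on both the client and server sides -- hence the
more modest ask of at least keeping passwords out of the
initial cleartext exchange.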
-Marcus