These are some notes I dashed off before leaving, so they might be out
of date now.  Take them for what they're worth.  To these, I would add:
0. AFS needs to pick up a bunch of small-time enthusiasts and
developers, not just the people who are close to big deployments.  If
that happens, the current CellServDB mechanism is going to hurt: lots
of new little cells coming and going will be a management nightmare for
everyone.  Using DNS to find cells never happened originally because
"that's a DFS feature", and then because the growth rate of the public
"global name space" dropped off.  The "ls /afs" problem (a directory
listing of /afs has to touch every known cell) could also become
serious.  A number of solutions have been proposed (and some attempted)
over the years; any one of them would be better than nothing.
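For concreteness, here is a minimal sketch of what DNS-based cell
discovery could look like in userspace, querying the AFSDB record type
from RFC 1183 through the standard resolver.  The cell name is just an
example, and a real client would parse the answer and fall back to
CellServDB; this is a sketch, not the OpenAFS code:

    /* cc -o afsdb afsdb.c -lresolv */
    #include <stdio.h>
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/nameser.h>
    #include <resolv.h>

    #ifndef T_AFSDB
    #define T_AFSDB 18              /* AFSDB RR type, per RFC 1183 */
    #endif

    int main(void)
    {
        unsigned char answer[NS_PACKETSZ];
        int len;

        res_init();
        len = res_query("grand.central.org", C_IN, T_AFSDB,
                        answer, sizeof(answer));
        if (len < 0) {
            fprintf(stderr, "no AFSDB records; fall back to CellServDB\n");
            return 1;
        }
        /* a real client would walk the answer section here (e.g. with
         * ns_initparse/ns_parserr) and treat subtype-1 targets as the
         * cell's database servers */
        printf("got %d bytes of AFSDB answer\n", len);
        return 0;
    }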

------------
1. Better warm cache performance
    (a) page flip accesses to local cache FS
    (b) exploit locality of reference to cached files rather than
destroying it (by dividing everything into little chunks).  Current
behavior is still just a quick-and-dirty hack to be able to support
files larger than the cache.  It still needs a _rational_ distinction
between the unit of network transfer and the unit of local cache usage,
without conflating either with the local storage mechanism (see the
sketch below).  The DFS guys had a good implementation under way --
talk to them.
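As a sketch of the distinction (b) asks for -- three independent
parameters instead of one hard-wired chunk size -- something like the
following.  All of the names here are hypothetical, not the actual
cache manager code:

    #include <stddef.h>
    #include <sys/types.h>

    /* hypothetical: one knob per concern, chosen independently */
    struct cache_policy {
        size_t xfer_unit;    /* unit of network transfer (per-RPC) */
        size_t cache_unit;   /* unit of local cache allocation/eviction */
        /* the local storage mechanism (UFS files, raw partition, ...)
         * is a third independent choice, hidden behind an interface: */
        int (*read_back)(void *store, off_t off, size_t len, void *buf);
        int (*write_back)(void *store, off_t off, size_t len,
                          const void *buf);
    };

    /* e.g. fetch 1MB per RPC, but keep locality by giving a file one
     * contiguous 64MB cache extent instead of scattered little chunks */
    static const struct cache_policy example = {
        .xfer_unit  = 1 << 20,
        .cache_unit = 1 << 26,
    };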

2. Better write performance.  Do this with "delayed commit" a la NFS3:
let writes stream to the server unstably and force them to disk with a
single commit afterwards (sketched below).
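A toy, self-contained sketch of those NFS3 (RFC 1813) semantics --
hypothetical names, not the real Rx interface, with the "server" side
called in-process: an UNSTABLE write may leave data in the server's
memory, COMMIT forces it to stable storage, and a verifier that changes
across server reboots tells the client when buffered-but-uncommitted
data has to be resent:

    #define _XOPEN_SOURCE 500
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static uint64_t boot_verifier = 0x1234;  /* e.g. server boot time */

    enum stable_how { UNSTABLE, FILE_SYNC };

    /* "server" write: with UNSTABLE, data may sit in the page cache */
    static uint64_t srv_write(int fd, off_t off, const void *buf,
                              size_t len, enum stable_how how)
    {
        pwrite(fd, buf, len, off);
        if (how == FILE_SYNC)
            fsync(fd);                       /* the slow, eager path */
        return boot_verifier;
    }

    /* "server" commit: flush everything written unstably so far */
    static uint64_t srv_commit(int fd)
    {
        fsync(fd);
        return boot_verifier;
    }

    int main(void)
    {
        int fd = open("demo.dat", O_RDWR | O_CREAT, 0644);
        const char msg[] = "hello";

        /* many cheap unstable writes, then one commit at the end */
        uint64_t v1 = srv_write(fd, 0, msg, sizeof msg, UNSTABLE);
        uint64_t v2 = srv_commit(fd);
        if (v1 != v2)                        /* server rebooted */
            fprintf(stderr, "resend uncommitted writes\n");
        close(fd);
        return 0;
    }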

3. Some way to take advantage of growing PC disks.  AFS was designed
for a workstation with a <100MB disk.  Disks are now up to 100
times larger than that, and nobody can stand to leave that space empty.

4. A better way to punch through firewalls.

5. Better security
(a) *good* cross-cell access control
(b) unique per-server keys, instead of one key shared by every server
in the cell

You can get 2&3 by running lots of servers -- IOW, more of a "peer to
peer" model; cf. XFS.  You would choose where volumes go as a matter of
policy -- perhaps managed automatically, perhaps not.  I would prefer
that my home volume live under my desk, so that I would not usually be
accessing it over the network.  Ideally, it would move to be closest to
the client that uses it most often, but that's a policy matter that
could be layered on top of the essential functionality, which is:
A. small servers, and
B. a "fast path" for access to locally-served data.  This is actually
not too hard (see the sketch below).  The client has to get a callback
on the data and be told what the inode number is; then it can just
iopen the thing directly, fault the pages into UFS, and flip them into
AFS (or copy them, if you can't figure out how to rename a page).  You
need to synchronize it with the fileserver process, of course.  Almost
certainly worth doing for any file >4KB.
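Here is a userspace sketch of that fast path, with hypothetical names
throughout -- the real client would do this in the kernel, and userspace
can't iopen by inode number, so a path into the server's partition
stands in for it.  The callback check and the synchronization with the
fileserver are stubbed out:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* stub: a real client tracks callback promises from the server */
    static int callback_valid(const char *fid) { (void)fid; return 1; }

    /* map the server's copy straight into our address space instead
     * of copying it over the loopback network */
    static void *fast_path_map(const char *fid, const char *vice_path,
                               size_t *lenp)
    {
        if (!callback_valid(fid))
            return NULL;          /* fall back to the normal Rx fetch */

        int fd = open(vice_path, O_RDONLY);
        if (fd < 0)
            return NULL;

        struct stat st;
        fstat(fd, &st);
        *lenp = st.st_size;

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);                /* the mapping survives the close */
        return p == MAP_FAILED ? NULL : p;
    }

    int main(int argc, char **argv)
    {
        size_t len;
        void *p = fast_path_map("1.2.3",
                                argc > 1 ? argv[1] : "demo.dat", &len);
        if (p)
            printf("mapped %zu bytes without an RPC\n", len);
        return 0;
    }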

Once you do this, and every Tom, Dick, and Harry has their own server,
the trust model needs revisiting.  But that shouldn't stop you from
doing the enabling work -- once this takes off, the trust model will
follow.
