Sorry it has taken me so long to follow up on this--the last couple of
days have been busy.

Doug wrote:
Excerpts from transarc.external.info-afs: 22-Feb-93 AFS Partition Size
Limit an.. "Doug Engert"@anl.gov (1946)

> about problems I would have with thrashing i.e. having the wrong optical
> disk mounted which could cause network problems while it got mounted.
> The Andataco cache disk in front of the optical disk may
> eliminate much of this contention.


Depending on the algorithms in use, the cache disk might help
considerably -- or it might not, depending on your environment.  For
example, if multiple clients are sharing the same file at slightly
(but not very) different times, then a simple LRU mechanism would be
helpful.  On the other hand, for a single client, a simple LRU-managed
cache disk in front of the jukebox would do little more than extend
the client's own cache.  (The IFS project at the University of
Michigan discovered some things about secondary caches and AFS, and
they weren't terribly encouraging.  I think that file access patterns
are not uniformly predictable; in fact, they may be chaotic.  The
existing empirical studies merely show that simple cache-management
strategies are effective over a limited range.)  If the cache disk
uses aggressive pre-fetching (when a platter is inserted into the
drive, grab as much off it as you can until you need a different
platter -- this could be a big win for jukeboxes), all the equations
change.  So I recommend thinking carefully about the algorithms used
by AFS and by the jukebox, and about the access patterns in your
environment.  I know a little about the former and very little about
the latter two.  If you can characterize the latter two, I think that
people on this list might have some more specific advice.
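To make that concrete, here is a toy sketch in C of the two policies I
have in mind: a small LRU-managed block cache sitting in front of a
platter, plus a prefetch pass that runs while a platter happens to be
mounted.  None of this is AFS, Andataco, or jukebox code; the slot
count, block size, and the platter_read() routine are made up purely
for illustration.

/*
 * Toy LRU block cache in front of an optical platter, with an
 * aggressive-prefetch pass.  Illustration only.
 */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4          /* tiny, for illustration only */
#define BLOCK_SIZE  8192

struct slot {
    long blockno;              /* which platter block lives here */
    long last_used;            /* logical clock for LRU replacement */
    char data[BLOCK_SIZE];
};

static struct slot cache[CACHE_SLOTS];
static long clock_tick = 0;

/* Made-up stand-in for actually reading the optical platter. */
static void platter_read(long blockno, char *buf)
{
    memset(buf, 0, BLOCK_SIZE);
    printf("platter read: block %ld (slow path: platter must be mounted)\n",
           blockno);
}

/* Return cached data for blockno, evicting the least recently used
 * slot on a miss. */
static char *cache_read(long blockno)
{
    int i, victim = 0;

    for (i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].blockno == blockno) {      /* hit: no platter I/O */
            cache[i].last_used = ++clock_tick;
            return cache[i].data;
        }
        if (cache[i].last_used < cache[victim].last_used)
            victim = i;
    }
    /* miss: evict the LRU slot and fetch from the platter */
    platter_read(blockno, cache[victim].data);
    cache[victim].blockno = blockno;
    cache[victim].last_used = ++clock_tick;
    return cache[victim].data;
}

/* Aggressive prefetch: while a platter is in the drive, pull in a run
 * of nearby blocks before giving the drive up. */
static void prefetch_platter(long first_block, int count)
{
    int i;
    for (i = 0; i < count; i++)
        (void) cache_read(first_block + i);
}

int main(void)
{
    int i;

    for (i = 0; i < CACHE_SLOTS; i++)
        cache[i].blockno = -1;

    prefetch_platter(100, 3);   /* platter mounted once, three blocks staged */
    (void) cache_read(101);     /* later references are cache hits */
    (void) cache_read(102);
    return 0;
}

The point of the prefetch pass is simply that once you have paid for a
platter swap, pulling extra blocks across is cheap compared to having
to mount the platter again later; whether that wins in practice still
depends on the access patterns I mentioned.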

Re sizes of partitions, volumes, and files

I suggested that making AFS handle partitions larger than 2 GB would
be very difficult.  My reference to "section 2 routines" did indeed
refer to system calls such as lseek and read (the vnodeop for read
takes a signed int offset, and all the vendors' Unix-clones that I
know of use a signed int file pointer in their iobuf struct).  I was
thinking of things which read raw partitions as if they were files,
such as fsck, the salvager, and maybe some of the volume operations.
On reflection, though, I don't think that volume operations are going
to present problems.  Volume size probably should be limited to 2 GB
so that vos dump/restore can still work to and from files, and moving
a volume larger than 2 GB might present some problems.  If you have a
couple of 1+ GB disks and an RS/6000, you can probably make a large
AIX volume called /vicepx and see what happens.  I don't have ready
access to that kind of hardware, and hunting through the source code
for references to signed ints would take days.
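To spell out the 2 GB arithmetic, here is a throwaway C program (mine,
not anything from the AFS or vendor source) showing what happens when
a byte offset gets squeezed into a signed 32-bit int, which is the
situation I believe the read vnodeop and the iobuf file pointer are
in.  It assumes a machine where int is 32 bits and long long is wider.

#include <stdio.h>

int main(void)
{
    long long wanted = 3LL * 1024 * 1024 * 1024;  /* a 3 GB byte offset */
    int       as32   = (int) wanted;   /* forced into a signed 32-bit
                                        * file pointer; on the usual
                                        * two's-complement machines it
                                        * comes out negative */

    printf("offset we need             : %lld bytes\n", wanted);
    printf("same offset as a signed int: %d\n", as32);
    printf("largest clean signed value : %d bytes (just under 2 GB)\n",
           2147483647);
    return 0;
}

2^31 - 1 is 2,147,483,647 bytes, which is where the 2 GB figure comes
from; anything past that either wraps negative or fails outright,
depending on the code involved.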

Excerpts from transarc.external.info-afs: 22-Feb-93 AFS and Optical
Disks "Doug Engert"@anl.gov (2040)

> Does anyone see a problem with using a file system implemented at the
> VFS level for AFS partitions?

Yes: the AFS file server operates directly on UFS inodes rather than
going through the VFS, so this seems like a no-go for now.
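To illustrate the distinction, here is a purely hypothetical sketch --
open_by_inode() is a made-up stand-in, not the real interface the
fileserver uses -- of the two ways a file on a vice partition might be
named:

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>

/* Made-up stand-in for the kind of service the fileserver relies on:
 * opening a file by (device, inode number) with no pathname and no
 * VFS lookup.  A stub here, since no portable user program has it. */
static int open_by_inode(dev_t dev, unsigned long ino)
{
    (void) dev;
    (void) ino;
    return -1;
}

int main(void)
{
    /* The VFS route: resolved by pathname, so any file system that
     * implements the VFS interface and is mounted at /vicepa could
     * answer it.  (The path is just an example.) */
    int fd_path = open("/vicepa/somefile", O_RDONLY);

    /* The route the fileserver depends on: naming the file by its
     * inode, which only makes sense if the partition really is a
     * local UFS with on-disk inodes underneath. */
    int fd_ino = open_by_inode((dev_t) 0, 1234UL);

    printf("open by path : %d\n", fd_path);
    printf("open by inode: %d\n", fd_ino);
    return 0;
}

A file system that exists only at the VFS level can satisfy the
pathname form, but there are no on-disk UFS inodes underneath it for
the server to name directly.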
 

I am personally interested in finding some solution for mass-storage in
AFS, so if you have specific requirements or just a vague interest in
AFS and  archival/migration/mass-storage/robotics, drop me a line at
[EMAIL PROTECTED]

Lyle.
