On Thu, 3 Nov 1994 14:06:40 GMT, Simon Cooke said:

> > Let me clarify that question. Why would calculating a random access
> > on this system depend on caching?
> Just for speed -- if you can keep the FAT in memory, then it's a lot
> faster

But CP/M doesn't have a FAT - hence my question.

> > > > I'm definitely not in favour of a FAT-based system.
> > >
> > > Yeah, but how else are we supposed to thread the files?
> >
> > What is "threading the files"?
>
> Chaining the sectors together to make up the actual file itself

Simple. Associate with each individual file the list of disk blocks
which it occupies - hence the concept of a Unix i-node. The information
doesn't have to be in an i-node; it can be in the directory or in a
header at the start of the file. Now, for a 20K file you only have to
cache 20 bytes (or possibly 40 bytes) of data. CP/M stores the list of
blocks in the directory entry.

Unless you already knew this and were just humouring me, you are
probably asking what happens if the file is so long that the list of
blocks doesn't fit in the directory entry (or the i-node, or the file
header). CP/M cures this in a not-spectacularly-satisfactory way by
allocating another directory entry to the file. Unix is more clever:
the last block number in the list does not point to a block of the
file - it points to a block of block numbers. There is enough space
there to list all the blocks in a 512K file. Unix actually goes two
steps further than this, though you would only need one on any
reasonable filing system: after the block of block numbers it has a
block of blocks of block numbers. This is sufficient for files up to
256M in length.

For completeness' sake, what about the free block list? Store a
pointer to the first free block somewhere, and then write at the start
of each free block the number of the next free block. Since the space
isn't being used for anything else and you are not likely to want to
do random access on the free list, this scheme works well.

imc
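The per-file block list described above can be sketched in a few lines of C. This is a toy illustration, not CP/M's or Unix's actual on-disk layout: the block size, list length, and struct name are all made up for the example. The point is that random access is a single array lookup, with no chain to walk.

```c
/* Sketch of a per-file block list (hypothetical layout, not CP/M's). */
#include <assert.h>
#include <stddef.h>

#define BLOCK_SIZE 1024u           /* 1K allocation blocks (assumed) */
#define MAX_BLOCKS 20              /* enough for a 20K file */

struct file_entry {
    size_t length;                 /* file length in bytes */
    unsigned blocks[MAX_BLOCKS];   /* physical block numbers, in file order */
};

/* Random access: find the physical block holding byte `offset`.
   Returns 0 for out-of-range offsets (assume block 0 is reserved). */
unsigned block_for_offset(const struct file_entry *f, size_t offset)
{
    if (offset >= f->length)
        return 0;
    return f->blocks[offset / BLOCK_SIZE];
}
```

With this scheme a 20K file costs 20 two-byte entries (the "20 bytes or possibly 40 bytes" above) rather than a whole in-memory FAT.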

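The 512K and 256M figures above both come out if you assume 1K blocks and 2-byte block numbers, so one block holds 512 block numbers; that assumption is mine, chosen to make the arithmetic match, not a claim about any particular Unix release. A quick check:

```c
/* Indirect-block arithmetic, assuming 1K blocks and 2-byte block numbers. */
#define BLOCK_SIZE   1024ul
#define BLOCKNO_SIZE 2ul
#define ENTRIES (BLOCK_SIZE / BLOCKNO_SIZE)   /* 512 block numbers per block */

/* Bytes reachable through one single indirect block: 512 * 1K = 512K. */
unsigned long single_indirect_bytes(void)
{
    return ENTRIES * BLOCK_SIZE;
}

/* Bytes reachable through one double indirect block:
   512 indirect blocks, each covering 512K, i.e. 256M. */
unsigned long double_indirect_bytes(void)
{
    return ENTRIES * ENTRIES * BLOCK_SIZE;
}
```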

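The threaded free list is simple enough to sketch too. Here the "disk" is a toy in-memory array holding only the first word of each block (the link); on a real disk you would read and write that word in the block itself. Block 0 stands in for the end-of-list marker, and all names are illustrative.

```c
/* Sketch of a free list threaded through the free blocks themselves. */
#define NBLOCKS 16
#define NIL 0                      /* block 0 reserved as end-of-list marker */

static unsigned disk[NBLOCKS];     /* first word of each block = next-free link */
static unsigned free_head = NIL;   /* stored "somewhere", e.g. a superblock */

/* Free a block: link it in at the head of the list. */
void free_block(unsigned b)
{
    disk[b] = free_head;
    free_head = b;
}

/* Allocate a block: pop the head; NIL means the disk is full. */
unsigned alloc_block(void)
{
    unsigned b = free_head;
    if (b != NIL)
        free_head = disk[b];
    return b;
}
```

Note that allocation and freeing each touch exactly one block, which is why the scheme works well when nobody needs random access to the free list.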