Lau wrote

> Unix (et alia) actually also does (did?) do software read-ahead. It
> assumed that when you read from a file, you would be wanting to read
> more of the same, so it initiated the read of the next sector (or
> whatever) as soon as it gave you the current one.
>
We tried this on the first QDOS hard disk drives and it actually slowed
things down. We tried drives from several suppliers with the same results.
Unlike floppy disks (and microdrives), where the data is transferred to the
host as it is read from the medium and the checksum is checked after each
sector to verify the data, modern hard disk reads are so subject to single
bit errors that they depend on error correction to give the right data.

The data is read into an internal buffer and when the whole sector has been
read the error(s) are corrected and the sector sent to the host. While the
last sector read is being transferred to the host, the next sector is
passing under the heads.

Unfortunately, none of the controllers were intelligent enough to carry on
reading the data into another buffer after the last sector requested had
been read, so reading ahead merely tied up the disk controller for a
complete revolution of the disk. At best this was rarely better than just
reading sectors as they were needed; at worst it could nearly double
access times.

In fact the controllers were optimised for stupid systems with a small
number of large buffers (up to 8 kbytes for a single read), inadequate for
handling more than one file at a time. As a result the speed on the MSDOS
disk benchmarks depended almost entirely on the capacity of the disk
controller to maintain copies of data already read, to avoid having to
re-read the data (that had been lost by MSDOS) from the disk.

The QDOS policy was to maximise the number of small buffers. This is not too
bad an idea on small machines but it runs into three problems:

- current hard disks are even more highly optimised to deal with Windows
file system strategies
- no built-in limit on the number of buffers, combined with a simple linear
search, resulted in excessive buffer searching costs on large memory
systems - various fixes have been implemented for this
- the HARDWARE has to support asynchronous I/O (this could just be
appropriate interrupts to allow PIO (see later message by Marcel) or it
could be interrupts + DMA)

The Atari was probably the last system on which a native QDOS / SMSQ
system was implemented that had the asynchronous file I/O hardware
required (QPC is not native).
As Atari themselves discovered when they tried to put a multitasking system
on the STE / TT series, this hardware was fundamentally flawed (the
technical director at Atari responsible for software was the same person
that took over my job at Sinclair; we still communicated from time to time).

Tony Tebby
