Hi Tom, Bernd, Michael et al,

I think int 25/26 etc. MUST indeed ensure BUFFERS cache CONSISTENCY,
but should NOT trigger ALLOCATION of cache items. See my explanations.

Consider this scenario (as it probably happens between FDNPKG, DOSLFN, and the kernel):

Someone writes normal data through the kernel.
The kernel has to allocate a new cluster, so it reads the FAT into
the cache, modifies it, and writes it back (but leaves it in the cache).

Now DOSLFN wants to create a new directory entry. It must allocate
a new cluster for the directory entry, so it reads the FAT from disk,
modifies the FAT sector, and writes it to the disk.

All fine so far, but the kernel still has the *unmodified* sector in its cache.

...and may do disk writes based on STALE BUFFERS cache data => Doom! :-o

I recommend that the INT 25/26 code look into the BUFFERS cache first,
so it gets the same view of the (cached) disk as the kernel has.
IMO this should solve the trouble.

I still have one important suggestion for that: Int 25/26
and int 21.7305 I/O should NOT trigger BUFFERS ALLOCATION.

They should only check whether the to-be-accessed sectors ALREADY
are in the BUFFERS cache and make sure the cache gets updated
for disk writes. Disk reads MUST read from cache IF the buffer
is flagged as dirty (an infrequent case) and MAY read from cache
in other cases.

This will help to prevent cache flooding. There are few buffers,
and otherwise a single int 25/26 etc. call would be enough to
cycle every single buffer through multiple individual sectors,
which is even SLOWER than the current way of invalidating ALL
buffers for THAT drive on any int 25/26 etc. call!

Imagine you have 20 buffers, 5 of which contain sectors from
drive D:, and then do ONE int 25 call which reads 60 sectors from
that drive. Current approach: 5 buffers SHOULD get invalidated.
Proposed approach: 5 buffers should get checked. NOT proposed
approach: 60 sectors get pushed through the 20 buffers, splitting
the I/O into 60 tiny I/Os, overwriting EACH buffer 3 times, AND
losing the contents of the 15 buffers holding data from other drives.

Our current approach just has ONE simple bug: invalidation of
buffers does not YET happen BOTH for failed AND for successful I/O.

So of course, the EASIEST solution would still be to fix the
already existing code that invalidates all BUFFERS for the
accessed drive (both in int 21.7305 and int 25/26 context) so
that it ALWAYS triggers, instead of the current "only do it if
the disk access has worked" or "only do it if the disk access
has failed". There should be no "only do it if" in this at all.

Of course this impacts performance, but we already do this
anyway. We just do not YET do it in ALL cases where we should.

Regards, Eric

PS: Sometimes it would be faster to handle a large read in one piece
IF no dirty buffers are found, instead of breaking it into a number
of segments when only a few sectors can be read from the cache.

PPS: Good to know that the critical error handler destroyed FS/GS
in 386 kernels! Most of the FS/GS handling was fixed many years
ago. How much extra stack will the error handler use now? Could
this explain OTHER kernel misbehaviour after criterr?




_______________________________________________
Freedos-devel mailing list
Freedos-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freedos-devel
