2011/4/2 Benny Lofgren <bl-li...@lofgren.biz>

>
> I've noticed that some (all?) Linux systems do uncalled-for file system
> checks at boot if no check has been made recently, but I've never
> understood this practice. It must mean they don't trust their own file
> systems,
>

I'm quite sure this comes from the fact that there are several ways for an
ext file system to develop errors (which show up in bash as
"input/output error" when you try to reference the affected file) without
the filesystem storing the error condition anywhere. So if you make a clean
shutdown and reboot, fsck will not know that a fsck is due and skips it,
and for that whole session, until the next reboot, the file is still as
inaccessible as before.
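For reference, the superblock does keep a little bookkeeping of its own: a
mount count, a maximum mount count, and a state word with a "clean" bit and
an error bit that e2fsck looks at. A minimal sketch of reading those fields
(Python; the offsets are from the ext2 on-disk format, and the device path
is of course just an example):

    import struct

    DEVICE = "/dev/sda1"   # example path, substitute your own

    with open(DEVICE, "rb") as f:
        f.seek(1024)       # the primary superblock lives at byte 1024
        sb = f.read(64)

    # s_mnt_count (u16), s_max_mnt_count (s16), s_magic (u16), s_state (u16)
    mnt_count, max_mnt, magic, state = struct.unpack_from("<HhHH", sb, 52)

    assert magic == 0xEF53, "not an ext2/3/4 superblock"
    print("mounted %d times, forced check after %d" % (mnt_count, max_mnt))
    print("cleanly unmounted:", bool(state & 1),
          "- errors flagged:", bool(state & 2))

The catch, as far as I can tell, is that the kernel only sets that error
bit for inconsistencies it detects itself; a read error bubbled up to
userspace as EIO doesn't necessarily mark the filesystem, so a clean
unmount leaves the state looking fine.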

And since only root may write the "magic" file (/forcefsck) in the (broken)
filesystem root, a normal user cannot force the fsck either, unless they
cut the power so the boot scripts see there was an unclean shutdown, OR
reboot 147 times (or whatever the interval may be) so the mount count
finally triggers the fsck at boot.
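The boot-side logic is roughly this (a simplified sketch; the exact
behavior varies per distribution, and e2fsck -p does its own superblock
bookkeeping):

    import os

    def boot_time_check(root="/"):
        # /fastboot suppresses the check outright, /forcefsck forces it;
        # both are plain files in the filesystem root, writable only by root
        if os.path.exists(os.path.join(root, "fastboot")):
            return "skip"
        if os.path.exists(os.path.join(root, "forcefsck")):
            return "force fsck"
        # otherwise e2fsck -p consults the superblock's mount count and
        # last-check time and decides for itself whether a check is due
        return "leave it to e2fsck -p"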

I don't pretend to know the optimal solution for keeping track of "hey, I
just told the user his file is corrupt, I should ask for a fsck on the
next mount", but even the early-80s Amiga floppy file systems had a global
"dirty" flag, so the OS would launch the disk validator the next time you
inserted the disk and "mounted" the filesystem after it had hit some kind
of read/write error.
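In other words, something like this, conceptually (a toy sketch, all names
made up):

    class Volume:
        def __init__(self):
            self.dirty = False      # one bit, kept on the disk itself

        def on_io_error(self, path):
            # the moment anything goes wrong, persist the flag;
            # it survives a perfectly clean unmount
            print("read/write error on", path)
            self.dirty = True

        def mount(self):
            if self.dirty:
                self.validate()     # the disk validator, or fsck
                self.dirty = False

        def validate(self):
            print("checking volume before use...")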

Letting users run 1-146 reboot cycles without a check even when you know
stuff is broken is horrid. And having a file inside the actual filesystem
to indicate "if this file isn't deleted it means something" as an inverse
flag really doesn't count (/fastboot or whatever), since if half your
files disappear and that one goes with them, its absence would indicate
"everything is fine".

-- 
 To our sweethearts and wives.  May they never meet. -- 19th century toast
