On Wed, Jan 02, 2013 at 08:48:50PM -0500, Alexander Spitzer wrote:
> How can the consequences of a failing drive appear so regularly and
> precisely? I thought that as a solid state drive fails, the available space
> becomes smaller and smaller as bad sectors appear. And the capacity has
> noticeably decreased from the original 4 GB; it is now around 3.7 GB. If it
> means anything, running the config.status script with the debug flag leaves
> a temporary directory intact. In there are several files with STUFF in
> them. Perhaps they are not properly copied to where they are supposed to go
> (Makefile etc.)?
> 
 On solid state, I believe that you will get bad erase blocks -
these might be large, e.g. 16 MB.  Once that starts to happen in
/bin, /usr/bin, /lib, or /usr/lib, then I assume some of the programs
(those with code, or linked to libraries, in those blocks) will no
longer work correctly.  I would expect errors to show, but maybe
not:

 In theory, things (even "spinning rust" drives) fail in predictable
ways.  Practice suggests otherwise (I've got a failing disk where
smartmontools reports an increasing number of unreadable sectors,
but insists the drive is not failing).  I've never used solid-state
except as film cards (and only updated by the camera), and as vfat
sticks where I've copied compressed tarballs so that ownership and
perms of the files within the tarball are maintained.
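
 For what it's worth, the tarball trick works because tar records the
owners and modes inside the archive itself, so vfat's lack of Unix
permissions doesn't matter.  Roughly like this (untested, and
"backup.tar.gz" / "somedir" are only placeholder names):

  # on a normal Linux filesystem - tar stores owners and perms by default
  tar -czf backup.tar.gz somedir/
  # ... carry backup.tar.gz around on the vfat stick ...
  # extract as root: -p keeps the modes, --same-owner restores the owners
  tar -xzpf backup.tar.gz --same-owner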

 There were some interesting comments on (cheap) solid-state
(internal) drives on The Register in the last couple of weeks -
basically, many (used as system drives in windows, in most reports)
died suddenly with exotic failures.  Other people had no problems,
so partly these failures involve bad luck (buying the equivalent of
what we used to call a "Friday afternoon car" - one rushed through
without any attention - back in the 1970s when we had a car industry
in the UK).

 You already have more experience of this than I do, so please feel
free to ignore my suggestions and prejudices.  But in that case, you
need to come up with your own theories about what is happening -
nobody else on the LFS/BLFS lists has ever reported this sort of
failure, AFAIK.

 Your system, your rules.

 If you want to debug it, perhaps you could mount a partition from
the host system somewhere under your LFS, then untar a package in
both, compare the outputs of ls -lR, run configure in both, then
again compare the outputs from ls -lR.  I suspect this (provided you
can mount the conventional disk - /mnt seems a likely existing
directory to try) will show that the runs on the spinning rust are
now a lot more reliable.
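
 Something along these lines, for example (untested, and /dev/sda2,
/mnt/host and foo-1.0.tar.xz are only placeholders for whatever your
host disk, mount point and package really are):

  # mount the host (spinning) disk somewhere under the LFS system
  mkdir -p /mnt/host
  mount /dev/sda2 /mnt/host

  # unpack the same tarball on each filesystem
  mkdir -p /tmp/ssd-test /mnt/host/disk-test
  tar -xf foo-1.0.tar.xz -C /tmp/ssd-test
  tar -xf foo-1.0.tar.xz -C /mnt/host/disk-test

  # take the listings from inside each tree so the paths line up
  ( cd /tmp/ssd-test       && ls -lR . ) > /tmp/ssd-untar.lst
  ( cd /mnt/host/disk-test && ls -lR . ) > /tmp/disk-untar.lst
  diff /tmp/ssd-untar.lst /tmp/disk-untar.lst

  # now run configure in both copies and compare again
  ( cd /tmp/ssd-test/foo-1.0       && ./configure )
  ( cd /mnt/host/disk-test/foo-1.0 && ./configure )
  ( cd /tmp/ssd-test/foo-1.0       && ls -lR . ) > /tmp/ssd-conf.lst
  ( cd /mnt/host/disk-test/foo-1.0 && ls -lR . ) > /tmp/disk-conf.lst
  diff /tmp/ssd-conf.lst /tmp/disk-conf.lst

 Timestamps on the freshly generated files, and the size of things
like config.log (which records the paths used), will differ a little
anyway; what to look for is files that are missing, empty, or wildly
different in size on the SSD side.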

ĸen
-- 
the first time as tragedy, the second time as farce