On Thu, Jan 06, 2005 at 05:29:39PM +0100, Spam wrote:
> 
> > It's a risk assessment.  What are the odds of your normal data sets
> > hitting the bug or of someone with malicious intent introducing
> > a demonstration program vs the performance hit of a filesystem
> > without the problem.
> 
>   How can I assess the risk, if I do not know how to produce the bugs?
>   You say certain conditions. But from what I read earlier in the
>   thread, a directory with fonts in it...?

Since the concepts of simulators, hash function analysis, and dataset
modelling seem to escape you, perhaps you need to fall back on the
black-and-white question "is any risk acceptable?", given anecdotal
data of one unexpected failing condition and one script that can
reliably reproduce the failing condition.
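For what it's worth, the risk assessment doesn't require a simulator to get a
first-order number. If the failure mode is two directory entries hashing to the
same value, a birthday-bound estimate gives the rough odds. This is only a
sketch: the 32-bit hash width and the 10,000-entry directory are illustrative
assumptions, not the actual parameters of any particular filesystem.

```python
import math

def collision_probability(n_names: int, hash_bits: int = 32) -> float:
    """Birthday-bound estimate of the chance that at least two of
    n_names filenames share a hash value, assuming the hash behaves
    like a uniform random function over 2**hash_bits buckets."""
    space = 2.0 ** hash_bits
    # P(collision) ~= 1 - exp(-n*(n-1) / (2 * space))
    return 1.0 - math.exp(-n_names * (n_names - 1) / (2.0 * space))

# Illustrative: a directory with 10,000 entries against a 32-bit hash.
print(f"{collision_probability(10_000):.4%}")
```

That comes out to roughly a percent for a single large directory, which is why
"will my normal data sets hit it" is a modelling question, not a yes/no one.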
> 
> > All filesystems will fail or suffer degraded performance under
> > certain conditions, you need to determine what conditions are acceptable
> > for your data.
> 
>   Slow can be acceptable. But failing? No, a filesystem should not
>   fail.

It should not fail even:
1) when the media itself fails?
2) when transport hardware is not compliant with specs (permanently-on
write caching, anyone?)
3) when the media reaches the end of its limited lifetime?
...

One thing I don't think I ever saw in this thread was:
1) How old was the drive that saw the problem?
2) What drive lifetime was used to calculate its MTBF?
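To put that MTBF question in perspective, here's a back-of-the-envelope
sketch. It assumes the constant-failure-rate (exponential) model that quoted
MTBF figures imply; the 500,000-hour MTBF and three-year service window are
illustrative numbers, not data from this thread. Real drives follow a bathtub
curve, so this understates early- and late-life risk.

```python
import math

def failure_probability(hours_in_service: float, mtbf_hours: float) -> float:
    """Chance a single drive fails within hours_in_service, under the
    constant-failure-rate (exponential) model behind MTBF figures."""
    return 1.0 - math.exp(-hours_in_service / mtbf_hours)

# Illustrative: three years of 24/7 service against a 500,000-hour MTBF.
three_years = 3 * 365 * 24  # 26,280 hours
print(f"{failure_probability(three_years, 500_000):.2%}")
```

A few percent per drive over its service life is exactly the kind of baseline
you need before blaming the filesystem rather than the hardware.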
-- 
Chris Dukes
Warning: Do not use the reflow toaster oven to prepare foods after
it has been used for solder paste reflow. 
http://www.stencilsunlimited.com/stencil_article_page5.htm
