This message is from the T13 list server.



Now hold on. When I put those proposals in front of the T13 committee a few years ago, I emphasized that a drive could and should be able to test itself, and that includes the microprocessor and its hardware. I can't quite remember the negative comments about the ideas, or the insults. But what can you expect? Some things never change.

I think the large sector size work and investigation was done by another
group. I think Gene was part of that.
 



On 3/12/02 3:23 PM, "Hale Landis" <[EMAIL PROTECTED]> wrote:

> On Tue, 12 Mar 2002 13:54:13 -0800, McGrath, Jim wrote:
>> going forward it's not even clear that ECC will be linked with 512 byte
>> sectors.
> 
> When I made my first "large physical sector" proposal, I asked the T13
> members to think about this and how they would continue to support
> R/W Long testing.
> 
> Let's assume the current R/W Long scheme (Harlan's implementation or
> something similar) is used, but the sectors are 4K bytes and there are
> up to 500 bytes of ECC data. Anyone want to estimate how long it
> would take just to walk a single "bad bit" through such a sector+ECC?
> Then how about walking 2 "bad bits"? Or combinations of multiple
> errors? I'm not sure any of us would live long enough to see such a
> test complete (even if you could find a computer system that could
> run that long!).
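The scale of that combinatorial explosion can be sketched with a quick calculation. The 10 ms per Write Long/Read Long pass below is an assumed figure, purely to put a timescale on the case counts; real command timing would vary:

```python
from math import comb

SECTOR_BYTES = 4096   # hypothetical 4K physical sector
ECC_BYTES = 500       # "up to 500 bytes of ECC data"
BITS = (SECTOR_BYTES + ECC_BYTES) * 8   # 36768 bit positions to corrupt

SECS_PER_CASE = 0.01  # ASSUMPTION: ~10 ms per Write Long/Read Long pass

# Walking k simultaneous "bad bits" means testing every combination
# of k positions out of BITS.
for k in (1, 2, 3):
    cases = comb(BITS, k)
    days = cases * SECS_PER_CASE / 86400
    print(f"{k} bad bit(s): {cases:,} cases, ~{days:,.1f} days")
```

Even under that optimistic per-pass assumption, the single-bit walk finishes in minutes, the 2-bit walk takes on the order of months, and the 3-bit walk runs to thousands of years, which is the point of the message above.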
> 
> A question for those of you who buy disk drives... When you purchase
> a disk drive, don't you assume that someone has verified that the
> drive's microprocessor(s) and buffer memory function correctly?
> (There has never been a way to test a microprocessor in a drive from
> a host system.) If you need to test a drive's ECC logic from the
> host, then why don't you also need to test the drive's microprocessor
> and buffer memory from the host? Why the "double standard"?
> 
> 
> 
> *** Hale Landis *** www.ata-atapi.com ***
> 
> 
> 
