This message is from the T13 list server.
> > I'd very much like to connect on this.
> > In design, there's nothing like not thinking
> > about a problem to make it then appear.
> > Help me get straight on the standard first?
> >
> > Should I believe the claim I hear:
> > that Ansi UDma allows the sender (host when writing,
> > device when reading) to clock zero, two, or four bytes
> > of extra garbage past the terminating pause?

> I think you are misunderstanding ...
> No garbage data is being sent at any time.

This is what I want to hear & believe. Please help me out.

I'd like to believe that shipping millions of Ata UDma implementations cleared the way for Atapi UDma. I just haven't heard yet that anyone has paid much attention to the special cases that are dramatically more common with Atapi Pio - the cases that broke back when people wrongly believed that shipping Ata Pio had cleared the way for Atapi Pio.

> > > Actually, in your example none of the bytes are valid.
> > > PS I'm actually an ATA expert, but
> > > I don't remember ATAPI commands behaving any differently.

> > I think you have pinpointed many of the areas
> > where your experience ...
> > differs from my experience.

> There is never a disagreement
> on the validity of the data
> sent during the transfer
> simply because no data sent
> by the sender is known by the receiver
> to be good until the command has completed successfully.

Whoa. The assumption of no data with ERR doesn't cover anything like all of Scsi - though maybe it does cover all of the subset of Scsi that a typical hard drive supports?

Consider, for example, the standard option that is the TB bit (aka "transfer block") of mode page 1 of Scsi, the read-write error recovery page. That bit, when set, says: end any read transfer with the raw data that failed ecc for the first bad block. Speaking in Atapi terms, that spec requires that ERR follow N blocks of valid data.
Data with ERR is NORMAL protocol for field applications that examine the geometry of media errors, answering questions like: was my cd scratched along a radius or along a track?

Data with ERR is also normal protocol for in-house test applications that check that all the read data passed back to the host matches what was written, on the theory that otherwise a shorter read in the same time and place might have miscompared quietly. Do hard drives not commonly undergo this test of quality?

> Earlier I mentioned that the sender could,
> on the last DMA burst, send out more data
> than the command indicated

Yes.

> that is a broken sender,
> since clearly the sender knows the length of the command.

No. I say again: the sender only knows which bits of the command it thought meant a count, in whatever units. Only out-of-band communication, which practice shows is unreliable, can determine whether the receiver will agree.

Let me be blunt: to my eye, claiming this out-of-band communication is reliable is just silly. If you think I'm silly to claim it isn't, then we're living in different worlds, or meaning different things when we use the same words.

Reliable communication is not what happens, except when plug 'n play works completely. I imagine plug 'n play works best with the devices distributed most widely that more people understand most deeply - like, say, hard drives with non-removable media - and works less well with lesser-known stuff.

> Reliable communication is not what happens

> All I'm hearing so far is that my Atapi question is novel enough,
> long enough after Ata implementations have shipped
> and pleased millions, that of course my question
> must be unimportant.

Let's presume it is me who is failing repeatedly to hear something significant here. Can someone give me any theory for why we should believe this open-loop out-of-band communication is reliable?

This communication is like a long game of Telephone. You know that game?
I talk to someone who talks to someone who talks to someone who ... who talks to you. This chain reaches from the app - let's say something shipping in binary code only, written in 1982, and running in a Windows dos box - down into the firmware of the drive.

Yea, this path is reliable if the same vendor owns host & device and tests everything they care about. Yea, this path can be reliable when it only conveys commands well-standardised before the host and the device both shipped. Yea, this path can look more reliable if at both ends people assume any unknown command doesn't move data.

But I know a plethora of specific defects in that long path that have hurt real people in the real world. Accurately reverse engineering and working around those defects is a large part of why I get paid. Those defects are all about competitive edge - like yea, you get a drive letter and so do I, but I can burn at 6X and you choke above 2X, nyah nyah.

> Pio ... UDma ...

The generic, commodity, transparent UsbMass bridges of Usb1 to AtapiPio were a blessing in part because the less non-compliant bridges worked to limit how much chaos resulted from unreliable communication of the count and direction in which data moved. Specifically, the basic generic UsbMass protocol REQUIRES the bridge to report when the device tries to move more data than the host expected, and to report specifically how many bytes did pass thru the bridge.

I'm trying to understand how firmly we can rely on those defences against chaos as we upgrade from Usb1/Pio4 to Usb2/UDma. I think I'm hearing we'd better not rely firmly on UDma: no form of Ide Dma supports counting odd bytes, and UDma injects an indeterminate 0, 1, or 2 extra byte pairs into the count, because the standard UDma protocol can't quite decide whether the device is or is not moving more or less data than expected.

Please tell me I'm wrong? Thanks again in advance.
Pat LaVarre

Subscribe/Unsubscribe instructions can be found at www.t13.org.
