This message is from the T13 list server.
Pat,

Part of the problem here is that you keep insisting that the receiver "requests" bytes during a data transfer from a sender. The problem is that you keep referring to ATAPI/PIO as the model, when actually it is a clever hack. The reason the receiver (bridge) needs to know the number of bytes to transfer from the sender (device) in that case is the oddity of the receiver providing the clocking for the sender's data (which is how PIO works in DATA IN). This is quite abnormal in any high-speed bus application with significant turnaround times on the bus, for the simple reason that it adds timing to each byte transferred.

Note that SCSI in particular has always worked from the convention that the sender of the data also provides the clock for the data. This is what UDMA does as well. As with SCSI, with UDMA there is simply no need for the receiver to have any preknowledge of the number of bytes to be transferred (since it no longer has to provide the clocks). The receiver acts entirely as a slave to the sender, and just latches (not clocks) in the data as the sender sends it.

Indeed, the only real-time issue the receiver has to confront is buffer management and flow control - which is why UDMA has the PAUSE and STOP capabilities. Note that this level of flow control is done in hardware and obeys very simple rules. The only thing that UDMA brings to the picture that some other protocols do not is the need for the receiver to take care that it always has a few bytes of FIFO empty, to allow for cases where the sender may not see the PAUSE request in time. This is a simple issue to handle.

So except for the possible odd-byte residue problem, I still cannot see why there is a problem. Today in ATAPI/PIO the receiver in DATA IN gets the byte count explicitly from the device in the form of register values. The receiver then uses that to generate the correct number of PIO clocks.
In ATAPI/UDMA the receiver gets the byte count by latching the clock transitions from the sender. The sender is sending the clocks, so the receiver does not have to worry about that. In either case the receiver always knows the number of bytes sent, and the information is ultimately always derived from actions of the device. The bridge can rely on that value just as well for either protocol - it just gets it in different ways.

Jim

-----Original Message-----
From: Pat LaVarre [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 11, 2001 6:18 AM
To: [EMAIL PROTECTED]
Subject: Re: [t13] to Dma from Pio: more Atapi clocks sent?

> [EMAIL PROTECTED] 12/10/01 21:34 PM
> I think my previous email
> basically covers this but....

Me, I do think I see something new here ...

> > The bridge will send too many clocks
> > any time its host tells it only
> > how many bytes of buffer
> > for data out were allocated,
> > rather than the smaller number of bytes
> > that should pass thru.

> Then this is a broken "host"... BAD HOST!

Ahh. Fun. This is slippery in exactly the right area.

Let's turn back to Ata, not Atapi, for a moment. Suppose a direct-attached Ata host moves out too many blocks for a vendor-specific Ata command like op xFA. To which standard do we point to say this host is a BadHost?

Suppose the host was built in layers: a bottom layer written by Microsoft, the upper layer that composed the command block written by the device vendor. Can we say the Microsoft layer is broken? Surely not: it's just accurately passing thru what the vendor's layer asked to have go thru.

Now if a hardware Usb/Ide bridge is transparent enough to let a Usb host create the identical Ide bus trace, can we say the Usb/Ide bridge is broken? I don't think so. We have to say the host that composed the command block and associated a length and direction of data with it is broken.
The bridge is just accurately simulating what would happen if that same host were more directly attached to the device.

> Attempting to write more data ...

Whoa. Agreed the bridge is broken per Ansi Ata/pi only if it sends more clocks than by protocol the device appeared to request. Agreed the results in Pio/SwDma/MwDma are particularly indeterminate, since they lack the check of a Crc.

But can we say the bridge is broken per Ansi Ata/pi if it just sends more clocks than were wanted, simply because they were apparently requested? For example, if an MwDma device turns DMARQ around too slowly to avoid requesting x202 bytes rather than x200 bytes, is the host broken to clock out the extra pair of bytes? If a UDma device turns around too slowly to avoid requesting x204 bytes rather than x200 bytes, is the host broken to clock out an extra 4 bytes? Maybe you want to say yes ... but according to specifically what portion of what standard?

Even with Ata rather than Atapi, clocking out as many as R + X * 2 + 1 bytes is correct protocol for such cases as write errors that cut the data transfer short. No?

> could be a hung host
> (a host that thinks more data
> will be transferred and is not expecting
> the device to end the command).

Ouch. I have heard of Dma hosts that wait for a timeout to discover the device did cut the data transfer short. This occurs much too commonly in Atapi to let a bridge easily be so fragile, though any particular host that composes command blocks can/should work to avoid this situation.

Pat LaVarre

Subscribe/Unsubscribe instructions can be found at www.t13.org.
