On Tue, 18 Dec 2001 09:50:48 -0700, Pat LaVarre wrote:

>I'm saying, with Atapi Dma, we cannot always distinguish
>intermediate from final bursts until after the burst ends.

It does not matter. The command is not completed until the
device deasserts DMARQ and changes its status to BSY=0 DRQ=0
and perhaps also asserts INTRQ. Up until that time, data is
transferred in as many DMA data bursts as the host and device
think are appropriate. Example: The device might like to do
DMA bursts of 400 bytes each but the host may restrict DMA
bursts to some lesser amount. No big deal... It just works.
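
As a sketch of what "command is completed" looks like from the
host side (a minimal sketch in C; the legacy primary-channel
Status register address and the inb() port-I/O primitive are
assumptions, not anything this thread specifies):

  #include <stdint.h>

  #define ATA_STATUS_REG 0x1F7  /* legacy primary channel Status */
  #define ATA_SR_BSY     0x80   /* bit 7: device busy */
  #define ATA_SR_DRQ     0x08   /* bit 3: data request */

  extern uint8_t inb(uint16_t port);  /* platform port-I/O read */

  /* Poll until the device reports BSY=0 DRQ=0: the command is
   * done and no more data bursts will be requested.  DMARQ is
   * not a status bit; the host's DMA engine sees it directly. */
  static void wait_command_done(void)
  {
      uint8_t st;
      do {
          st = inb(ATA_STATUS_REG);
      } while (st & (ATA_SR_BSY | ATA_SR_DRQ));
  }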

>If anyone here thinks somehow we can know in advance which burst
>is final, please tell me how.

The final burst is the final burst for the command, the burst
just before the device indicates the command is completed.

>Given that we cannot know in advance which burst will be the
>final burst, then we cannot know which bursts are intermediate,
>therefore we cannot know if more bytes clocked across the bus
>than the device would have "willingly requested" in Atapi Pio
>mode.

If, for a "write" command, especially when using DMA, the host
tries to send more data to the device than the amount of data
the device would want given the SCSI CDB, then we have a
BAD/BROKEN HOST.

>With Atapi Pio, the device decides the byte length of each
>burst, thus the total byte length transferred.  With most forms
>of Atapi Dma, the device decides the "word" length of each burst,
>thus the total "word" length transferred.

But the length of each individual burst, PIO or DMA, means
NOTHING.  Only the total number of bytes transferred for the
command might be of some value, and that value is not known
until the device says "ok, the command is done".  For a "write"
command both the host and the device should reach the "ok,
command is done" state at approximately the same time so there
are no extra/residual bytes sent from the host to the device.
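
Stated as code (a sketch; the CDB layout is the 10-byte WRITE
BUFFER (3Bh) used in the examples later in this message, with
the parameter list length in bytes 6..8, big-endian -- the
function name is mine):

  #include <stdint.h>
  #include <stddef.h>

  /* The host must queue exactly this many bytes for the data-out
   * phase; queuing more is the BAD/BROKEN-host case above. */
  static size_t write_buffer_expected_bytes(const uint8_t cdb[10])
  {
      return ((size_t)cdb[6] << 16) | ((size_t)cdb[7] << 8)
                                    | (size_t)cdb[8];
  }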

>But with Atapi UDma Out, any time the device agrees to move more
>than zero yet less than the count of bytes the host expected, the
>device does not decide the total "word" length.

The device does determine when it has all the data it needs even
if the host is a BAD/BROKEN host and tries to send more data.
The only unusual thing is that the bus turnaround times may
allow the host to send a few more data bytes to the device.
These would be bytes that are not part of the command's data
transfer (as defined by the SCSI CDB).  A host that sends these
extra data bytes is a BAD/BROKEN host (if these extra bytes are
not part of the command's data then what are they?).
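
In rough C, the device-side behavior described above might look
like this (a sketch only -- the spec describes the behavior, not
an algorithm, and all names here are mine; the CRC routine is
sketched further below):

  #include <stdint.h>
  #include <stddef.h>

  extern uint16_t udma_crc16_update(uint16_t crc, uint16_t word);

  /* One word arrives from the host.  It is stored only while the
   * byte count implied by the SCSI CDB is unsatisfied; any extra
   * words a BAD/BROKEN host clocks across during turnaround still
   * pass through the CRC but are otherwise discarded.  (Pad-byte
   * handling for odd counts is omitted.) */
  void device_accept_word(uint16_t word, size_t expected,
                          size_t *received, uint16_t *crc,
                          uint8_t *buf)
  {
      *crc = udma_crc16_update(*crc, word);
      if (*received + 2 <= expected) {
          buf[*received]     = (uint8_t)(word & 0xFF);
          buf[*received + 1] = (uint8_t)(word >> 8);
      }                       /* else: junk bytes, toss them */
      *received += 2;
  }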

>By continuing to
>clock "word"s out during the turnaround time past when the device
>asked to pause, a UDma Out host can force the transfer of an
>extra X "word"s, with max X = 2 for UDma 33 and rising with burst
>rate.

This is true, but only if the host is BAD/BROKEN.

>> a UDma Out host can force the transfer of an extra X "word"s

Yes, and for U-DMA the device must include these bytes in the
CRC but otherwise ignore them.  Yes, the ATA/ATAPI interface is
not robust: there is no requirement that a device detect such a
BAD/BROKEN host and terminate the current command with an error.
ATA devices are not required to report that a BAD/BROKEN host
tried to send them 514 bytes for a one-sector write command.
And the same is true for an ATAPI device.
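
For reference, a software model of the Ultra DMA CRC (generator
polynomial x^16 + x^12 + x^5 + 1, seed 4ABAh, updated once per
16-bit word; the spec defines an equivalent parallel circuit,
and the bit ordering below is the usual software convention --
treat it as a sketch):

  #include <stdint.h>

  /* Fold one data word into the CRC.  Every word that crosses
   * the bus during the burst is included, junk bytes and all. */
  uint16_t udma_crc16_update(uint16_t crc, uint16_t word)
  {
      crc ^= word;
      for (int b = 0; b < 16; b++)
          crc = (crc & 0x8000)
                    ? (uint16_t)((crc << 1) ^ 0x1021)
                    : (uint16_t)(crc << 1);
      return crc;
  }

  /* Both ends reset to the 4ABAh seed at the start of each
   * Ultra DMA burst:  uint16_t crc = 0x4ABA;  */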

>After so much discussion, this now seems very plain to me.  I
>can't seem to persuade anyone to answer this directly?  Mostly I
>hear about why I shouldn't care.  On occasion I hear a flat
>denial that I'm seeing what I'm seeing.

I think we have directly answered this several times.  Only a
BAD/BROKEN host can cause this.  And ATA/ATAPI has no requirement
that a device detect such a broken host or report an error when
this is detected.

>An example simulation is:

>  x 3B 0 02 00:00:00 0 01:FD 0 /o x1000

>Here we've told our host to allocate and pin a x1000 byte
>physical page of memory.  Our cb (command block) is a standard
>WriteBuffer of x1FD bytes.  With Atapi Pio, in the bus trace, we
>see x1FD bytes requested.  We see (x1FD + 1) / x2 = xFF clocks of
>data sent.

OK, just what should be expected for PIO (the last byte is a pad
byte).
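
The arithmetic, spelled out:

  /* An odd ATAPI byte count is padded up to a whole 16-bit word,
   * so x1FD bytes show up as (0x1FD + 1) / 2 = 0xFF word clocks. */
  unsigned words_clocked(unsigned bytes)
  {
      return (bytes + 1) / 2;
  }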

>The imprecision over which we all agree I think, is that with
>Atapi Dma, we see an indistinguishable xFF clocks of data again
>even if the cb is different, even if we try a standard
>WriteBuffer of x1FE bytes:

>  x 3B 0 02 00:00:00 0 01:FE 0 /o x1000

OK, there is no pad byte.

>The imprecision over which we have not yet agreed is that with
>UDma Data Out we sometimes see x100 or x101 clocks, depending on
>how full the host's fifo was when the device asked to pause and
>then terminate after xFF clocks.

BAD BAD BAD BAD HOST!!!!  BROKEN BROKEN BROKEN HOST!!!!

Why is the host sending more data to the device than what the
SCSI CDB tells the device to expect?  Fix the host.  Or live with
the broken host knowing that for DMA the device will toss the
extra bytes (bytes that are beyond the data for the command so
what are they other than a bunch of junk data bytes from the host
buffer?).

>In short, Ide Dma makes nonzero but unexpectedly short byte
>counts inaccurate by as much as X * 2 + 1, rising with burst
>rate.

Nothing here is inaccurate.  The SCSI CDB said x1FE bytes, the
device expects x1FE bytes, the host should have sent x1FE bytes.
Only a BAD/BROKEN host would send more bytes.
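
Restating the quoted bound as arithmetic (X and the X*2+1 figure
are Pat's, with max X = 2 at UDMA 33):

  /* Worst-case slop in an observed byte count for a short UDMA
   * data-out: X extra words forced across during turnaround
   * (2 bytes each), plus the possible pad byte on an odd count. */
  unsigned max_residual_slop_bytes(unsigned X)
  {
      return X * 2 + 1;       /* = 5 bytes at UDMA 33 (X = 2) */
  }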


***  Hale Landis  *** [EMAIL PROTECTED] ***
*** Niwot, CO USA ***   www.ata-atapi.com   ***

