This message is from the T13 list server.
Pat,
Perhaps I have not been clear, but I will be now in response to your
comments:
"> a UDma Out host can force the transfer of an extra X "word"s
"After so much discussion, this now seems very plain to me. I can't
seem to persuade anyone to answer this directly?"
The answer is that no, this can never happen. Is that plain enough? Some
emails may have been confusing on this point, since people were often
engaging in speculation. But no, with a compliant host and device this
simply never occurs. On to your WRITE BUFFER example:
"An example simulation is:
x 3B 0 02 00:00:00 0 01:FD 0 /o x1000
Here we've told our host to allocate and pin a x1000 byte physical page
of memory.
Our cb (command block) is a standard WriteBuffer of x1FD bytes. With
Atapi Pio, in the bus trace, we see x1FD bytes requested. We see
(x1FD + 1) / x2 = xFF clocks of data sent."
First, thanks for the concrete example. This is the sort of data you really
need to supply to get answers that will "close the loop" for you. If I am
reading you correctly here, we have a WRITE BUFFER command (3B), with mode 2
(02). In this mode the PARAMETER LIST LENGTH is "the maximum number of
bytes that shall be transferred from the Data-Out Buffer to be stored in the
specified buffer (in the device)." In this case the value the host supplies
is 1FD.
Note that this is the MAXIMUM number of bytes the host can transfer to the
device - if the host transfers more bytes, it is in error (BAD HOST) - this
is all specified in the SPC standard for WRITE BUFFER. The device in turn
relies on this number to determine when a command has finished
executing. Unless the device knows, and can rely on, this number, no
successful SCSI command execution is possible.
In a SCSI environment using a WIDE SCSI bus you would expect to see FF
clocks of data sent (1FE bytes).
In ATAPI PIO mode you would also see FF clocks sent (1FE bytes).
In ATAPI DMA, you would see FF clocks sent (1FE bytes) (this according to
Hale, whom I trust on this issue).
In ALL cases you see FF clocks sent (1FE bytes).
In ALL cases the device received the 1FD value in the command parameters.
In all cases the device knows that it must receive FF clocks of 16 bits of
data (1FE) and ignore the last byte in the last clocked word. In no case
does the device fail to operate successfully - it terminates the command
correctly and executes the command correctly. Yes, the SCSI and PIO cases
provide additional, non-command information that is accessible to your
"transparent" bridge, but since your job is just to transfer the bytes as
the device and host provide them to you, this is simply not an issue. The
DEVICE, which is the thing that terminates commands, ALWAYS relies on the
command parameters.
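The clock arithmetic above can be checked with a short sketch (my own Python, not anything from the SPC or ATA standards; the function name is mine):

```python
def clocks_for_bytes(nbytes: int) -> int:
    """Number of 16-bit word clocks needed to carry nbytes of data.

    An odd byte count is rounded up: the final clocked word carries one
    padding byte that the device must ignore.
    """
    return (nbytes + 1) // 2

# PARAMETER LIST LENGTH from the WRITE BUFFER (3B) command block above.
parameter_list_length = 0x1FD
clocks = clocks_for_bytes(parameter_list_length)
bytes_on_bus = clocks * 2

assert clocks == 0xFF          # FF clocks in wide SCSI, PIO, and DMA alike
assert bytes_on_bus == 0x1FE   # 1FE bytes cross the bus
assert bytes_on_bus - parameter_list_length == 1  # device drops the pad byte
```

In every transfer mode the same xFF clocks appear on the bus; only the command parameter (x1FD) tells the device that the final byte is padding.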
So what is the problem? Note that the ONLY WAY to test if WRITE BUFFER is
working correctly or not is to issue one, and then follow it with a READ
BUFFER, and compare the two sets of data. Did your guy do that? What was
the result?
Basically Harlan, Hale, and I are failing to see the problem. The three
different techniques to transfer data (SCSI, PIO, UDMA) all differ at the
lower levels (as you might expect), but all operate correctly at the command
level - which is the only thing that matters. This example does not change
that, since it looks like a perfectly correct execution of WRITE BUFFER.
Jim
-----Original Message-----
From: Pat LaVarre [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 18, 2001 8:51 AM
To: [EMAIL PROTECTED]
Subject: RE: [t13] significant byte counts other than the total
> Harlan Andrews <[EMAIL PROTECTED]> 12/17/01 07:35PM
> ...
> I still don't see the problem.
Again I thank everyone here for your remarkable continued patience &
courteous interest. I think/hope we're all focused in the same place now:
all the English I see now is slippery in exactly the right spot.
> "Mcgrath, Jim" <[EMAIL PROTECTED]> 12/17/01 04:39PM
> ...
> In SCSI the device always has responsibility
> for the correct termination of a command
Yes. Yes. Yes.
In direct-attached byte-wide Scsi, the host and the device each decide a max
count of bytes that will move each way. Whenever host & device agree on
direction, an actual byte count that is the min of those two max counts
actually moves. It is the clash of this model with Ide Dma that is giving
me pain. It disturbs me that we don't all agree this clash is as severe as
it looks to me.
> Harlan Andrews <[EMAIL PROTECTED]> 12/17/01 07:35PM
> ...
> >... perhaps we can agree that with UDma
> > we will see more data move than we did with Pio?
> UDMA and PIO should always move exactly
> the same amount of DATA for the completed command.
> You keep talking about:
> byte counts inaccurate by as much as X * 2 + 1, rising with burst rate
> You MUST be confused about the INTERMEDIATE bursts.
I was confused this way in the beginning - but I think I am no longer.
I'm saying, with Atapi Dma, we cannot always distinguish intermediate from
final bursts until after the burst ends.
If anyone here thinks somehow we can know in advance which burst is final,
please tell me how.
Given that we cannot know in advance which burst will be the final burst,
then we cannot know which bursts are intermediate, therefore we cannot know
if more bytes were clocked across the bus than the device would have "willingly
requested" in Atapi Pio mode.
With Atapi Pio, the device decides the byte length of each burst, thus the
total byte length transferred. With most forms of Atapi Dma, the device
decides the "word" length of each burst, thus the total "word" length
transferred.
But with Atapi UDma Out, any time the device agrees to move more than zero
yet less than the count of bytes the host expected, the device does not
decide the total "word" length. By continuing to clock "word"s out during
the turnaround time past when the device asked to pause, a UDma Out host can
force the transfer of an extra X "word"s, with max X = 2 for UDma 33 and
rising with burst rate.
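The pause overrun described here can be modeled with a toy function (a hypothetical sketch; the names and the simple latency model are my assumptions, not anything from the ATA/ATAPI documents):

```python
def udma_out_words_transferred(device_wants_words: int,
                               host_words: int,
                               pause_latency_words: int) -> int:
    """Words actually clocked in a UDMA data-out burst.

    When the device asks to pause after device_wants_words, the host may
    keep clocking for up to pause_latency_words more words while its
    pipeline drains during the turnaround, bounded by how many words the
    host actually has to send.
    """
    return min(host_words, device_wants_words + pause_latency_words)

# Device agrees to fewer words than the host expected: extra words can land.
assert udma_out_words_transferred(0xFF, 0x200, 2) == 0x101  # up to 2 extra
# If the device accepts everything the host has, no overrun is possible.
assert udma_out_words_transferred(0x100, 0x100, 2) == 0x100
```

Under this model the overrun X only appears when the device cuts the burst short of the host's expected count, which matches the "more than zero yet less than the count of bytes the host expected" condition above.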
> a UDma Out host can force the transfer of an extra X "word"s
After so much discussion, this now seems very plain to me. I can't seem to
persuade anyone to answer this directly? Mostly I hear about why I
shouldn't care. On occasion I hear a flat denial that I'm seeing what I'm
seeing.
Can anyone tell me specifically what is unreal about my simulations?
(I say "simulation" because just now I'm working remotely. A colleague with
a bus analyser saw the same imprecision last week in the real world, but I
think the specific byte counts at issue may have been different.)
An example simulation is:
x 3B 0 02 00:00:00 0 01:FD 0 /o x1000
Here we've told our host to allocate and pin a x1000 byte physical page of
memory. Our cb (command block) is a standard WriteBuffer of x1FD bytes.
With Atapi Pio, in the bus trace, we see x1FD bytes requested. We see (x1FD
+ 1) / x2 = xFF clocks of data sent.
The imprecision over which we all agree, I think, is that with Atapi Dma, we
see an indistinguishable xFF clocks of data again even if the cb is
different, even if we try a standard WriteBuffer of x1FE bytes:
x 3B 0 02 00:00:00 0 01:FE 0 /o x1000
The imprecision over which we have not yet agreed is that with UDma Data Out
we sometimes see x100 or x101 clocks, depending on how full the host's fifo
was when the device asked to pause and then terminate after xFF clocks.
In short, Ide Dma makes nonzero but unexpectedly short byte counts
inaccurate by as much as X * 2 + 1, rising with burst rate.
No?
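The worst-case X * 2 + 1 figure can be checked with a short sketch (hypothetical Python; the names and the value of X are my assumptions, with X = 2 taken from the UDma 33 figure above):

```python
def bus_bytes_range(nbytes: int, max_extra_words: int) -> tuple:
    """Range of byte counts a bridge could see clocked on the bus for
    nbytes of intended data, when a UDMA-out host may force up to
    max_extra_words extra 16-bit words past the device's pause request.
    """
    min_clocks = (nbytes + 1) // 2          # word padding for odd counts
    max_clocks = min_clocks + max_extra_words
    return (min_clocks * 2, max_clocks * 2)

X = 2  # assumed max extra words for UDma 33, per the text
lo, hi = bus_bytes_range(0x1FD, X)
assert (lo, hi) == (0x1FE, 0x202)     # xFF through x101 clocks observed
assert hi - 0x1FD == X * 2 + 1        # inaccuracy of up to X * 2 + 1 bytes
```

That is, for an odd intended count the observed bytes overshoot by one pad byte plus up to 2X overrun bytes, which is where X * 2 + 1 comes from.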
Pat LaVarre
Subscribe/Unsubscribe instructions can be found at www.t13.org.