This message is from the T13 list server.

 
I think you are misunderstanding.  The sender is NEVER allowed to send
"garbage" bytes for a command whose execution is expected to succeed
(i.e. as soon as it sends any such bytes, it must then follow up by
ultimately indicating that the command failed, and the command is then
usually retried).

The sender can always pause the transfer whenever it wants.  But it must
pause it when it runs out of user data to send, or is informed by the
receiver that there is no space in the receiver to hold the data.  When the
condition that generated the pause is cleared, then data transfer can resume.

A receiver can always force a pause whenever it wants.  But it must do so
when it runs out of space to store the data.  Since timing conditions
prevent the ability to ensure that pauses are "on the dime," the receiver
must, under some circumstances (all specified in the protocol), pause when it
still has a few bytes of fifo space left.  A lot of designs just always
pause when there is some space left to make sure that no overrun occurs.
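The threshold idea above can be sketched roughly like this.  The depth and
margin numbers are made up for illustration - real parts derive the margin
from the UDMA timing budget - but the point is that the pause threshold sits
below the physical FIFO depth so in-flight words cannot overrun:

```python
# Illustrative receiver-side FIFO model; constants are invented.
FIFO_DEPTH = 32     # 16-bit words of buffer space (assumed)
PAUSE_MARGIN = 4    # words the sender may still clock after pause asserts (assumed)

class UdmaReceiverFifo:
    def __init__(self):
        self.words = []

    def should_pause(self):
        # Assert the pause early, while a few words of space remain.
        return len(self.words) >= FIFO_DEPTH - PAUSE_MARGIN

    def clock_in(self, word):
        # Words already in flight when the pause asserts still land here.
        if len(self.words) >= FIFO_DEPTH:
            raise OverflowError("FIFO overrun: pause asserted too late")
        self.words.append(word)
```

A design that pauses at the threshold can still absorb the in-flight margin
without overrunning, which is the "always pause with some space left" habit
described above.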

No garbage data is being sent at any time.   There is never a disagreement
on the validity of the data sent during the transfer simply because no data
sent by the sender is known by the receiver to be good until the command has
completed successfully.  

Earlier I mentioned that the sender could, on the last DMA burst, send out
more data than the command indicated - but that is a broken sender, since
clearly the sender knows the length of the command.  To my knowledge no
device ever does that as sender.  I'll leave it to others to comment on
actual host sender behavior, but I don't know of any cases where the host
would be broken in this manner either.

People may be confusing this with the case of the host as receiver - in this
case it is not uncommon for the host software to not program the DMA
controller with the command size, counting on the device (as sender) to
always stop at the appropriate point.  This works as long as the device
works, but is clearly not good defensive programming.
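As a sketch of the defensive practice being recommended - my illustration,
not anything the standard spells out - a host receiver that does program the
byte count from the command can reject a sender that keeps clocking past it:

```python
def receive_bursts(expected_bytes, burst_word_counts):
    """Hypothetical host-as-receiver bookkeeping: each UDMA clock moves one
    16-bit word, and the running total is checked against the byte count
    taken from the command instead of trusting the sender to stop."""
    received = 0
    for words in burst_word_counts:
        received += 2 * words
        if received > expected_bytes:
            raise IOError("sender clocked past the command's byte count")
    return received
```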

Is this line of inquiry for understanding, or because a real problem has
been encountered?  I've never seen a problem like this with any shipping
implementation of UDMA - and it has been shipping for years now.

Jim


-----Original Message-----
From: Pat LaVarre
To: [EMAIL PROTECTED]
Sent: 12/2/01 7:31 AM
Subject: Re: RE: [t13] yea, UDma doesn't count bytes well

This message is from the T13 list server.


> I don't understand ...
> ... ill founded.

I'd very much like to connect on this.  In design, there's nothing like
not thinking about a problem to make it then appear.

I'm sorry I was unclear, thank you for explaining that I was.  I find
your specific points clarifying - please consider giving me yet another
chance if you find my replies otherwise.

> really talking
> about the host SENDING more bytes
> than was specified in the command.

> PS I expect that if a host on an OUT
> ever tried to send out more data
> than the drive expected,
> that the result would be an error posted by the drive.

Yes I'm looking to understand exactly this case.

I also want to understand the symmetric case: what happens when a device
on an IN tries to send more data than the host expected.

> The only problem occurs if the host/device fail
> ... to halt the last burst in the command
> (the only one that has to stop at a certain point)
> at the right point.

Yes, except for this case, people more or less clearly can design
Ansi-compliant hosts & devices to avoid disagreements over how many
bytes moved which way across an Ide bus.

> ... possible that device designers
> will allow this
> just ignore the overflow
> (since the CRC check is still valid).

Help me get straight on the standard first?

Should I believe the claim I hear: that Ansi UDma allows the sender
(host when writing, device when reading) to clock zero, two, or four
bytes of extra garbage past the terminating pause?

Or do people just commonly in this way disregard the text we (T13)
published?

> ... just commonly ...

I hear host UDma receivers have shipped that neglected to include such
garbage in their Crc.  I hear this because, by shipping, they then flushed
out devices that did clock garbage past the terminating pause.

By now I think I've heard of three chips that have revved to change how
they handle garbage clocked past the terminating pause.

> Under all conditions,
> both the host and the device
> know ... the number of clocks).
> And if for some reason they did not
> (i.e. a double clock),
> then that is what the CRC is for
> - they will not match.

Yes, that part works wonderfully, thank you.

Here UDma has an advantage over Pio/SwDma/MwDma.  Because of the Crc,
the host and device can believe they agree on how many clocks of byte
pairs they both saw.
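For reference, here is my reading of the Ultra DMA Crc - treat the seed and
polynomial as my understanding of the spec, not gospel: both ends seed a
16-bit register with 4ABAh and fold in each clocked word using the
polynomial x^16 + x^12 + x^5 + 1, so two ends that saw the same word stream
compute the same residue:

```python
def udma_crc16(words, seed=0x4ABA, poly=0x1021):
    """Bitwise CRC over 16-bit data words, one update per UDMA clock.
    Seed 4ABAh and polynomial x^16+x^12+x^5+1 (1021h) are my reading
    of the ATA Ultra DMA annex, so double-check against the standard."""
    crc = seed
    for w in words:
        crc ^= w & 0xFFFF
        for _ in range(16):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

Because the Crc is linear, two equal-length word streams differing in even
one bit always produce different residues - which is exactly the
agreement-on-what-was-clocked property described above.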

> both the host and the device
> know how many bytes are sent
> (they get this from counting the number of clocks).

They can't get an accurate byte count from a clock count if the protocol
neglects to say if the last 0, 1, or 2 clocks move data or garbage.
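A tiny worked example of that complaint, with one 16-bit word per clock (the
clock count here is arbitrary): the same clock count yields three different
byte counts depending on how many trailing clocks carried garbage.

```python
def bytes_from_clocks(clocks, trailing_garbage_clocks=0):
    # Each UDMA clock transfers one 16-bit word (2 bytes).  If the
    # protocol leaves open whether the last 0, 1, or 2 clocks carried
    # real data, the clock count alone cannot pin down the byte count.
    return 2 * (clocks - trailing_garbage_clocks)
```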

> Note however that in both cases
> the protocol is not at fault
> - no one (especially the standard) ever said
> that the host or device could ignore
> the number of bytes in the command!

I'm not sure I understand.

UDma, like any other Ide data transfer mode, leaves the decision of
which bits of the command mean a count of bytes moving which way in what
units entirely to out-of-band communication.

I mean to be talking about how helpful the Atapi UDma protocol is or is
not when that out-of-band communication is unreliable, when compared
with the gold standard - the fully adequate standard - that is Atapi
Pio.

> I don't know of DMA systems
> where you don't program byte counts!

Good to hear, thank you.  Me neither.

All the more ironic that SwDma/MwDma/UDma all share the defect of
neglecting to provide the Dma engine with a way to convey this signed
byte count to the device so that someone on earth could see both the
host count and the device count simultaneously.

Core FireWire & Usb fix this.  There, every core communication -
standard or vendor-specific - shares a common scheme for transmitting to
the device the sign and magnitude of the byte count.
Subscribe/Unsubscribe instructions can be found at www.t13.org.
