This message is from the T13 list server.

> I don't understand ...
> ... ill founded.

I'd very much like to connect on this.  In design, there's nothing like not thinking 
about a problem to make it appear.

I'm sorry I was unclear; thank you for explaining where.  I find your specific 
points clarifying - please consider giving me yet another chance if you find my 
replies otherwise.

> really talking
> about the host SENDING more bytes
> than was specified in the command.

> PS I expect that if a host on an OUT
> ever tried to send out more data
> than the drive expected,
> that the result would be an error posted by the drive.

Yes I'm looking to understand exactly this case.

I also want to understand the symmetric case: what happens when a device on an IN 
tries to send more data than the host expected.

> The only problem occurs if the host/device fail
> ... to halt the last burst in the command
> (the only one that has to stop at a certain point)
> at the right point.

Yes.  Except for this case, it's more or less clear that people can design 
Ansi-compliant hosts & devices that avoid disagreements over how many bytes moved 
which way across an Ide bus.

> ... possible that device designers
> will allow this
> just ignore the overflow
> (since the CRC check is still valid).

Help me get straight on the standard first?

Should I believe the claim I hear: that Ansi UDma allows the sender (host when 
writing, device when reading) to clock zero, two, or four bytes of extra garbage past 
the terminating pause?

Or do people just commonly disregard the text we (T13) published in this way?

> ... just commonly ...

I hear that host UDma receivers have shipped that neglected to include such garbage in 
their Crc.  I hear this because, by shipping, they flushed out devices that did 
clock garbage past the terminating pause.

By now I think I've heard of three chips that have revved to change how they handle 
garbage clocked past the terminating pause.

> Under all conditions,
> both the host and the device
> know ... the number of clocks).
> And if for some reason they did not
> (i.e. a double clock),
> then that is what the CRC is for
> - they will not match.

Yes, that part works wonderfully, thank you.

Here UDma has an advantage over Pio/SwDma/MwDma.  Because of the Crc, the host and 
device can believe they agree on how many clocks of byte pairs they both saw.
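That Crc agreement can be sketched in code.  The following is a minimal Python sketch, 
not a verified transcription of the standard: it assumes the UDma Crc parameters as I 
recall them from the Ata spec (polynomial x^16 + x^12 + x^5 + 1, seed 4ABAh) and one 
common serial bit ordering.

```python
def udma_crc16(words, crc=0x4ABA):
    # CRC-16 over a stream of 16-bit words, one word per clock.
    # Polynomial x^16 + x^12 + x^5 + 1 (0x1021), seed 4ABAh.
    for w in words:
        crc ^= w                      # fold the next word in
        for _ in range(16):           # then advance 16 bit-times
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

payload = [0x1234, 0x5678, 0x9ABC]
# A receiver that folds a trailing garbage word into its Crc will,
# in general, disagree with a sender whose Crc stopped at the payload:
print(hex(udma_crc16(payload)), hex(udma_crc16(payload + [0xFFFF])))
```

If the two ends fold different word streams into their Crcs - say one end counted the 
trailing garbage and the other didn't - they will, in general, compute different Crcs, 
which is exactly how the disagreement surfaces.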

> both the host and the device
> know how many bytes are sent
> (they get this from counting the number of clocks).

They can't get an accurate byte count from a clock count if the protocol neglects to 
say whether the last 0, 1, or 2 clocks move data or garbage.
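The arithmetic of that ambiguity, as a sketch (the helper name is mine, and the 0-2 
garbage-clock range is the one the claim I hear describes):

```python
def candidate_byte_counts(clocks, max_garbage_clocks=2):
    # Each clock moves one 16-bit word.  If up to max_garbage_clocks of
    # the trailing clocks may carry garbage rather than data, a clock
    # count alone leaves this many plausible byte counts:
    return [2 * (clocks - g) for g in range(max_garbage_clocks + 1)]

print(candidate_byte_counts(256))  # [512, 510, 508]
```

Three plausible byte counts from one clock count - which is the sense in which the 
clock count stops being an accurate byte count.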

> Note however that in both cases
> the protocol is not at fault
> - no one (especially the standard) ever said
> that the host or device could ignore
> the number of bytes in the command!

I'm not sure I understand.

UDma, like any other Ide data transfer mode, leaves the decision of which bits of the 
command mean a count of bytes moving which way in what units entirely to out-of-band 
communication.

I mean to be talking about how helpful the Atapi UDma protocol is or is not when that 
out-of-band communication is unreliable, when compared with the gold standard - the 
fully adequate standard - that is Atapi Pio.

> I don't know of DMA systems
> where you don't program byte counts!

Good to hear, thank you.  Me neither.

All the more ironic that SwDma/MwDma/UDma all share the defect of neglecting to 
provide the Dma engine with a way to convey this signed byte count to the device so 
that someone on earth could see both the host count and the device count 
simultaneously.

Core FireWire & Usb fix this.  There, every core communication - standard or 
vendor-specific - shares a common scheme for transmitting to the device the sign and 
magnitude of the byte count.

Subscribe/Unsubscribe instructions can be found at www.t13.org.
