This message is from the T13 list server.

 
Pat,

It might make sense to discuss this offline, since I suspect that the email
bandwidth and response time is not allowing us to close on the issues.  I'm
still not even understanding the problem you are highlighting.

For instance, there are NEVER garbage bytes in a working (as opposed to a
botched) implementation of UDMA, and any implementation so botched would
probably fail almost immediately, and so never be shipped.  These "extra
bytes" are no more or less than the next bytes of user data that would be
sent anyway after the PAUSE condition was removed - they are not "garbage".
No fixing of the data stream is required.  And as long as the sender knows
the byte count for the command (not the burst), it will never send any
more data than the command calls for.  Note that none of this is unique
to UDMA - this issue was first seen in DATA IN for Multiword DMA years ago,
for similar timing reasons.
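To make the point concrete, here is a minimal sketch (names and structure are purely illustrative, not from any spec or driver) of a receiver that honors the command byte count rather than the burst length.  Any word pairs clocked past the count the command calls for - the 0, 2, or 4 bytes a sender may emit past a PAUSE - are simply clipped, so the user buffer never contains "garbage":

```python
def receive_burst(dst, cmd_bytes, received, words):
    """Hypothetical sketch: copy a burst into dst, but never past the
    byte count the command calls for.  Extra words clocked after a
    PAUSE are dropped, not stored - they need no "fixing" later."""
    room = cmd_bytes - received           # bytes the command still calls for
    take = min(len(words), room)          # clip at the command byte count
    dst[received:received + take] = words[:take]
    return received + take                # new running byte total

# Command asks for 8 bytes; the sender clocks 12 (4 extra past the pause).
buf = bytearray(8)
total = receive_burst(buf, 8, 0, bytes(range(1, 13)))
# total == 8; buf holds only the 8 bytes the command called for
```

The design point is the one Jim makes above: the receiver keys off the command's byte count, not the burst's, so the burst protocol's imprecision never reaches user data.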

As for the command information, that is the same information, in the same
location (the command registers, or a similar location in the packet), as for
PIO or Multiword DMA.  So I don't see how you can feel happy about PIO but
have problems with DMA - it's using the same command mechanisms in either
case to send the information (especially transfer length) to the device.  At
higher levels of software you can always have bugs unrelated to ATA at all,
but I fail to see the ATA issue in that.

Finally, there has never been any assurance that the data transferred is
correct in whole or in part for PIO or DMA, or for ATAPI commands using PIO
or DMA.  For errors for which a location can be identified, you can check
various ending status data to find that location.  For some errors (e.g.
CRC) you cannot isolate the error, and so cannot get that information.

Since UDMA was designed to be a drop-in replacement for PIO, there really
are not any differences in the implementations at the command protocol level
(besides the obvious syntax differences), except that the DMA protocol does
not require the per-sector (or per-block) checking of STATUS.  But that
"feature" is of no use to the system, especially since the command never has
to terminate on an error even in PIO mode.  Only the status information
available at the end of the command can be relied on.

So I'm still not seeing the problem.  And I never ran into one when working
on a USB-ATA bridge using UDMA (which did support ATAPI commands).  Like I
said, maybe an offline conversation would prove more productive.  

Jim




-----Original Message-----
From: Pat LaVarre
To: [EMAIL PROTECTED]
Sent: 12/3/01 4:38 PM
Subject: Re: [t13] UDma count well NOT



> Larry Barras <[EMAIL PROTECTED]> 12/03/01 03:26PM

Thanks for speaking - on any reflector, I much prefer to see more than
two people speaking, and here I think I've now seen four speak.

> doesn't seem like a great example,
> but I'll play your game a bit here anyway.

> From what I can gather, you are unhappy
> because the udma burst protocol
> doesn't indicate precise byte counts.
> I'll concede that you are correct,

Ouch.  I am NOT misunderstanding?  This is true?

We stand by "Ansi UDma allows the sender (host when writing, device when
reading) to clock zero, two, or four bytes of extra garbage past the
terminating pause"?

> the host-side DMA hardware
> *should* be capable
> of indicating with a great degree of  precision
> the amount of data that has transferred.

Yes.

An uncelebrated discriminating factor in the quality of your quick-turn
UDma silicon is: can your hardware not only count byte pairs, but also
separately report the count of clocks, if any, that occurred past the
terminating pause?

This matters because "If somehow magically the host & the device can
establish that the sender of the data will never be so rude as to clock
garbage, then both the host & the device can count byte pairs clocked.
Then they both can know how many byte pairs the receiver moved."
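A minimal sketch of the two counters described above (all names are illustrative; this models only the arithmetic, not the silicon): the byte pairs the receiver actually kept within the command, and the clocks, if any, that landed past the terminating pause:

```python
def burst_counters(cmd_bytes, clocked_pairs):
    """Hypothetical sketch: given the command's byte count and the total
    word pairs the sender clocked, report (pairs kept, clocks past the
    pause).  If the second number is zero on both sides, both sides can
    trust the first number as the count of byte pairs actually moved."""
    wanted_pairs = cmd_bytes // 2                        # pairs the command calls for
    kept_pairs = min(clocked_pairs, wanted_pairs)        # pairs the receiver kept
    clocks_past_pause = max(0, clocked_pairs - wanted_pairs)  # rude extra clocks
    return kept_pairs, clocks_past_pause

# A 512-byte command where the sender clocked 2 extra pairs past the pause:
# burst_counters(512, 258) -> (256, 2)
```

Hardware that can report the second counter separately, rather than folding it into the first, is exactly the "uncelebrated discriminating factor" mentioned above.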

> the host-side DMA hardware
> *should* ...
> This should be true of a personal computer
> or any protocol bridging device
> serving as host to the ATA/ATAPI drive.

Yes.

> the host-side DMA hardware

I'd say the receiver/sender symmetry of UDma implies that whatever we need
in host-side hardware we need likewise in device-side hardware?  Of course,
a bridge to Ide can fix only the host-side hardware.

Specifically, if when writing we need the host to never send garbage
write data so that we can know the device did not receive it, then
likewise when reading we need the device to never send garbage read data
so that we can know the host did not receive it.

> The fact that there are DMA controllers
> which lack this capability and systems
> that have chosen to use them
> is another argument altogether.

Ok.

> If someone is building a bridge
> between different protocols,
> then it is their job to perform
> all of the translation
> between the different capabilities
> and error reporting/recovery protocols
> between the  devices.

In theory, sure.

> Unfortunately, the world at large
> has been blessed
> with a variety of  bridge chips
> that work with variable predictability.

Yes.

And if UDma throws indeterminate byte counts into the mix, this will get
worse.  Look at the trouble I'm having here persuading people that byte
counts matter.

If even the people who care enough to pay the cost of participating in
talk like this can't easily believe how real a problem this is, how
likely are bridge and device makers at large to look out for it and take
care of it?

More specifically, let's suppose bridging to AtapiUdma can be as
transparent as bridging to AtapiPio except when the out-of-band
communication of the host's expected and the device's actual count of
bytes is incorrect.

Who then will include, in a bridge, the cost and interoperability risk
of trying to interpret Atapi commands?

No one.

Pat LaVarre


Subscribe/Unsubscribe instructions can be found at www.t13.org.