This message is from the T13 list server.

I was hoping that Jim would respond, but since he has not yet, I'll
ask a few similar questions...

On Mon, 8 Apr 2002 17:10:00 -0600, Pat LaVarre wrote:
>Why did we the committee invent more than one way of inserting delay into a
>UDma transfer?

I think it is because Ultra DMA was thought of as a "synchronous DMA"
protocol where data is transferred on a regularly occurring clock
signal (U-DMA calls it the "strobe"). If that clock needs to stop,
then there should be a formal method to indicate that state (and we
call that "pause" in U-DMA). I don't think slowing the toggle rate of
the clock was considered to be a valid thing to do... the clock
either runs at the right frequency or it is not running. Is that
right, Jim?

>Is the correct nutshell to say the sender can delay sending a clock, the
>receiver can ask for a "pause", and either can ask for a "termination"?  And
>for the sender is a "pause" anything other than a delay in sending the clock?

In U-DMA, "pause" is a state where the clock signal is not changing.
Is it legal for the strobe to just "slow down" or not toggle for some
period of time without using the "pause" protocol? This seems to be
very unclear in the U-DMA protocol description. Jim?
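
Just to make the distinction concrete, here is roughly how I picture
the sender's side of a burst if my reading above is right (illustrative
C only -- the names below are mine, not the spec's signal names or
timing):

  /* Illustrative model only -- invented names, not the spec's signals. */

  /* From the sender's point of view a burst is in exactly one of these
   * states; "strobe toggling slower than the agreed rate" is not one.  */
  enum burst_state {
      BURST_RUNNING,    /* strobe toggling at the selected UDMA rate,
                           one word transferred on each strobe edge     */
      BURST_PAUSED,     /* strobe held steady -- no edges, no data      */
      BURST_TERMINATED  /* burst over, interface back to the idle state */
  };

  /* Data moves only while the burst is running; a pause simply means
   * no strobe edges are generated until the pause is released.         */
  int words_transferred(enum burst_state s, int strobe_edges)
  {
      return (s == BURST_RUNNING) ? strobe_edges : 0;
  }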

>Why isn't the spec simpler?  Why not let the sender delay at will, let the
>receiver ask for such delay, but "terminate" only after copying the last byte
>of data?

Termination is required for reasons other than slowing down the data
transfer rate or reaching the end of a command execution. As
described in my previous email, there are times when PIO activities
may be required during a DMA data transfer command; that requires the
ATA interface to be returned to the "PIO" state, which in turn
requires that the current DMA data burst be terminated. Any host
adapter folks care to comment on my previous email about this?
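
To put the same point another way (again just a sketch -- the function
names here are made up, not any real host adapter API), the host cannot
do register (PIO) cycles while a UDMA burst is in progress, so it has
to terminate the burst, do its PIO work, and then start a new burst:

  #include <stdbool.h>

  /* Made-up helpers standing in for whatever a real host driver does. */
  extern bool more_data_to_move(void);
  extern bool pio_access_needed(void);  /* e.g. a task file register read     */
  extern void run_udma_burst(void);     /* runs until something stops the burst */
  extern void terminate_burst(void);    /* returns the interface to the PIO state */
  extern void service_pio_request(void);

  void transfer_command_data(void)
  {
      while (more_data_to_move()) {
          run_udma_burst();

          if (pio_access_needed()) {
              /* The current burst must be terminated before any PIO
               * activity; a fresh burst starts on the next pass.      */
              terminate_burst();
              service_pio_request();
          }
      }

      /* End of the command's data: the last burst is terminated too.  */
      terminate_burst();
  }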



*** Hale Landis *** www.ata-atapi.com ***


