This message is from the T13 list server.


PAUSE works quite simply.  If the sender wants to pause, it stops
transitioning the clock strobe line to the receiver (which normally uses it
to signal the time to latch the contents of the data bus).  The receiver
does not do anything.  Essentially the receiver state machine is waiting to
see a transition on the strobe line, and it does nothing until it sees one.
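
In rough C, the receiver side amounts to something like this (the names are
made up for illustration; this is only a sketch of the idea, not an
implementation):

#include <stdbool.h>
#include <stdint.h>

struct udma_receiver {
    bool     last_strobe;   /* strobe level seen on the previous sample */
    uint16_t latched_word;  /* most recent word taken off the data bus  */
};

/* Called whenever the receiver samples the interface signals.
 * Returns true if a new word was latched on this sample. */
static bool udma_receiver_sample(struct udma_receiver *rx,
                                 bool strobe, uint16_t data_bus)
{
    bool edge = (strobe != rx->last_strobe);  /* UDMA latches on both edges */
    rx->last_strobe = strobe;

    if (!edge)
        return false;   /* no transition: the sender is pausing, keep waiting */

    rx->latched_word = data_bus;
    return true;
}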

If the receiver wants to pause then things are a bit more complicated.  The
receiver must signal the pause to the sender (via the HDMARDY- or DDMARDY-
signals).  This signal is very simple in operation - the sender state
machine monitors the signal and pauses transfer when it detects assertion.
It continues in this state until the receiver deasserts the signal, at which
time it can (but does not have to) resume transfers.
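
A sketch of the sender-side rule, again with made-up names (rdy_asserted
below stands in for HDMARDY-/DDMARDY- being asserted):

#include <stdbool.h>

enum sender_state { UDMA_SENDING, UDMA_PAUSED };

static enum sender_state udma_sender_step(enum sender_state state,
                                          bool rdy_asserted)
{
    switch (state) {
    case UDMA_SENDING:
        /* Receiver asserted HDMARDY-/DDMARDY-: stop toggling the strobe. */
        return rdy_asserted ? UDMA_PAUSED : UDMA_SENDING;
    case UDMA_PAUSED:
        /* Receiver deasserted the signal: the sender may resume (this
         * sketch resumes right away, but it does not have to). */
        return rdy_asserted ? UDMA_PAUSED : UDMA_SENDING;
    }
    return state;
}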

This is the simplest flow control protocol you get at this level of
interface.  The only issue that ever arose is that the speed of data
transfer prevents "stopping on a dime" if you are the receiver.  So
receivers have to pause the transfer while they can still receive the words
"in flight."  In SCSI this is a real issue that can result in hundreds of
bytes in flight.  But given the restrictions specified for ATA, you only
have to worry about 3 words.
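
In other words, the receiver just needs to request the pause while it still
has headroom for what is already on the wire.  A toy version of that check,
with a made-up FIFO size (only the 3-word margin comes from the ATA
restriction above):

#include <stdbool.h>

#define RX_FIFO_SIZE         16  /* made-up buffer size for the example     */
#define UDMA_WORDS_IN_FLIGHT  3  /* worst case still arriving after a pause */

/* Assert HDMARDY-/DDMARDY- (request a pause) once the free space left in
 * the receive FIFO drops to the in-flight margin. */
static bool receiver_should_pause(int words_queued)
{
    return (RX_FIFO_SIZE - words_queued) <= UDMA_WORDS_IN_FLIGHT;
}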

So yes, pausing was new with UDMA.  But it is not very difficult to
implement, and is much more efficient than getting out of the DMA burst.

Jim


-----Original Message-----
From: Hale Landis [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 09, 2002 2:20 PM
To: [EMAIL PROTECTED]
Subject: Re: RE: [t13] UDMA Bursts - Pause versus Termination


This message is from the T13 list server.


I was hoping that Jim would respond. But since he has not yet, I'll
ask a few similar questions...

On Mon, 8 Apr 2002 17:10:00 -0600, Pat LaVarre wrote:
>Why did we the committee invent more than one way of inserting delay into a
>UDma transfer?

I think it is because Ultra DMA was thought of as a "synchronous DMA"
protocol where data is transferred on a regularly occurring clock
signal (U-DMA calls it the "strobe"). And if that clock needs to stop
then there should be a formal method to indicate that state (and we
call that "pause" in U-DMA). I don't think slowing the toggle rate of
the clock was considered to be a valid thing to do... the clock
either runs at the right frequency or it is not running. Is that
right, Jim?

>Is the correct nutshell to say the sender can delay sending a clock, the
>receiver can ask for a "pause", and either can ask for a "termination"?  And
>for the sender is a "pause" anything other than a delay in sending the
>clock?

In U-DMA, "pause" is a state where the clock signal is not changing.
Is it legal for the strobe to just "slow down" or not toggle for some
period of time without using the "pause" protocol? This seems to be
very unclear in the U-DMA protocol description. Jim?

>Why isn't the spec simpler?  Why not let the sender delay at will, let the
>receiver ask for such delay, but "terminate" only after copying the last byte
>of data?

Termination is required for reasons other than slowing down the data
transfer rate or reaching the end of a command execution. As
described in my previous email, there are times when PIO activities
may be required during a DMA data transfer command. That requires
that the ATA interface be returned to the "PIO" state, which in turn
requires that the current DMA data burst be terminated. Any host
adapter folks care to comment on my previous email about this?
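
To make the sequence I have in mind concrete (every name below is
invented, this only shows the ordering):

/* Host-side flow when PIO activity is needed in the middle of a DMA command. */
static void udma_terminate_burst(void)     { /* CRC exchange, release the bus */ }
static void do_required_pio_activity(void) { /* e.g. a register access        */ }
static void udma_start_new_burst(void)     { /* re-arm the DMA burst          */ }

static void host_handle_pio_during_dma(void)
{
    udma_terminate_burst();       /* interface is back in the PIO state */
    do_required_pio_activity();
    udma_start_new_burst();       /* then continue the data transfer    */
}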



*** Hale Landis *** www.ata-atapi.com ***

