This message is from the T13 list server.


Pat,

You have to allow for multiple bursts of DMA.  There are lots of good
reasons for this, but the real reason is that the host has to be able to
terminate the burst whenever host-side software wants to take a look at the
ATA bus.

Think about it.  You cannot read STATUS and do a DMA burst at the same time.
So whenever the host tries to do a PIO operation (under the control of host
software), either the host hardware has to make up the returned data
(obviously not very good) or terminate the DMA burst.  And if the burst is
terminated, then you really don't want to abort the command (since the device
has no idea why the host did the PIO, and the host could easily keep on doing
it over and over again).
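
In rough C, the host-side sequence is something like the sketch below.  The
helpers and the register constant are illustrative stand-ins, not any
particular chipset's interface:

    #include <stdint.h>

    extern void stop_dma_burst(void);    /* host terminates the current burst */
    extern void resume_dma_burst(void);  /* re-arm DMA; the command continues */
    extern uint8_t inb(uint16_t port);   /* PIO read of an I/O port */

    #define ATA_STATUS 0x1F7             /* primary-channel Status register */

    uint8_t read_status_during_dma(void)
    {
        uint8_t status;

        stop_dma_burst();          /* can't do PIO and a DMA burst at once */
        status = inb(ATA_STATUS);  /* the PIO access the software wanted */
        resume_dma_burst();        /* don't abort the command - just resume */
        return status;
    }

The point is the middle of that function: the burst comes down before the PIO
read, and the command stays alive across it.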

In PIO data transfers you have a natural stopping point between blocks, and
the host CPU is actually busy transferring the data anyway - so data
transfer and random PIO reads are interlocked, since the software on the CPU
is doing both of them.  But there are simply no good interlocks between the
host DMA hardware and the CPU-executed driver software - so we had to make
sure that the system continued to work in these cases.

Notice that this has nothing to do with UDMA, but instead has been how ATA
DMA has operated from the very beginning.  Over time we've REDUCED the
average number of DMA bursts used per command, but the architecture always
has made provision for multiple bursts per command.

You could argue that the device should not terminate the burst.  Indeed,
most devices will have very long bursts.  But for devices it is possible to
get into a situation where the burst could be suspended for milliseconds
rather than nanoseconds.  Here freeing up the ATA bus for things like
overlapping makes a lot of sense.  Indeed, the whole overlapping/queuing
protocol makes multiple bursts all the more sensible (think of them as
multiple connections in SCSI).
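
A rough sketch of that device-side decision, with an invented threshold and
invented helper names (real firmware policy will vary):

    #define LONG_STALL_NS 1000  /* illustrative cutoff: "short" vs. "long" */

    extern unsigned expected_stall_ns(void);  /* firmware's stall estimate */
    extern void udma_pause(void);             /* burst stays open on the bus */
    extern void udma_terminate(void);         /* burst closes; bus is freed */

    void handle_data_stall(void)
    {
        if (expected_stall_ns() < LONG_STALL_NS)
            udma_pause();      /* cheap: no burst re-initiation needed */
        else
            udma_terminate();  /* let overlapped/queued commands at the bus */
    }

A millisecond-class stall is exactly where holding the bus paused would be
the wrong answer.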

Given this, the only thing you could get rid of is PAUSE.  The problem there
is that most disruptions in the data flow are for very short periods of time
- terminating the DMA burst and restarting it all of the time for slight
buffer flow irregularities would be an inefficient use of the bus (it can
take on the order of 600 ns to do a complete terminate/resume cycle, which
is about 80 bytes of data time).  Note that PAUSE is actually new to ATA DMA
with the UDMA protocol.  It was the envisioned higher speeds, and the more
complicated burst initiation/termination protocol, that made the use of
PAUSE make sense.
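
As a quick sanity check on that arithmetic, assuming nominal burst rates
(with MB/s meaning 10^6 bytes/s): 80 bytes in 600 ns corresponds to roughly
133 MB/s, while mode 5's nominal 100 MB/s comes to about 60 bytes.

    #include <stdio.h>

    int main(void)
    {
        double cycle_ns = 600.0;                 /* terminate/resume round trip */
        double rates_mb_s[] = { 100.0, 133.0 };  /* UDMA mode 5, mode 6 */

        for (int i = 0; i < 2; i++) {
            /* bytes of data time = (bytes per ns) * (ns per cycle) */
            double bytes = rates_mb_s[i] * 1e6 / 1e9 * cycle_ns;
            printf("%.0f MB/s: ~%.0f bytes per cycle\n", rates_mb_s[i], bytes);
        }
        return 0;                                /* prints ~60 and ~80 */
    }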

Jim


-----Original Message-----
From: Pat LaVarre [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 08, 2002 4:10 PM
To: [EMAIL PROTECTED]
Subject: Re: RE: [t13] UDMA Bursts - Pause versus Termination


Thanks Felix S for saying - but may I ask you (& the company here) to
elaborate?

> One good reason for a device to terminate a UDMA burst is device FIFO full.

Ouch, this I don't understand.

Why did we, the committee, invent more than one way of inserting delay into a
UDMA transfer?

Is the correct nutshell to say the sender can delay sending a clock, the
receiver can ask for a "pause", and either can ask for a "termination"?  And
for the sender, is a "pause" anything other than a delay in sending the
clock?

Why isn't the spec simpler?  Why not let the sender delay at will, let the
receiver ask for such delay, but "terminate" only after copying the last
byte of data?

Ignorantly, curiously yours,

x4402 Pat LaVarre   [EMAIL PROTECTED]
http://members.aol.com/plscsi/


>>> [EMAIL PROTECTED] 04/05/02 09:07AM >>>

Pat,

One good reason for a device to terminate a UDMA burst is device FIFO full.
As pausing does not require re-initiating the burst transfer, it's a more
efficient way to take a time out for one hundred nanoseconds or so (UDMA
mode 5).
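
In firmware terms the idea might look like the sketch below; the watermark
values and function names are invented for illustration:

    #define FIFO_HIGH_WATER 480  /* pause the burst when this full (words) */
    #define FIFO_LOW_WATER  128  /* resume once drained back down to here */

    extern unsigned fifo_level(void);  /* current FIFO occupancy */
    extern void udma_pause(void);      /* short stall; burst stays open */
    extern void udma_resume(void);     /* continue with no re-initiation */

    void fifo_flow_control(void)
    {
        if (fifo_level() >= FIFO_HIGH_WATER)
            udma_pause();
        else if (fifo_level() <= FIFO_LOW_WATER)
            udma_resume();
    }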

regards,
fs

...
Without disagreeing with the claim that, per the printed spec, pause vs.
termination "should not" much matter, back in the real world I too would
enjoy hearing "under what conditions a device would typically terminate a
UDMA burst."
...
