This message is from the T13 list server.

Thanks, Felix S, for saying so - but may I ask you (& the company here) to
elaborate?

> One good reason for a device to terminate a UDMA burst is that the device FIFO is full.

Ouch, this I don't understand.

Why did we (the committee) invent more than one way of inserting delay into a
UDMA transfer?

Is the correct nutshell to say the sender can delay sending a clock, the
receiver can ask for a "pause", and either side can ask for a "termination"?
And for the sender, is a "pause" anything other than a delay in sending the
clock?

Why isn't the spec simpler?  Why not let the sender delay at will, let the
receiver ask for such a delay, and "terminate" only after copying the last
byte of data?

Ignorantly, curiously yours,

x4402 Pat LaVarre   [EMAIL PROTECTED]
http://members.aol.com/plscsi/


>>> [EMAIL PROTECTED] 04/05/02 09:07AM >>>

Pat,

One good reason for a device to terminate a UDMA burst is that the device
FIFO is full. Since pausing does not require re-initiating the burst
transfer, it's a more efficient way to take a time-out of one hundred
nanoseconds or so (UDMA5).
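Felix's efficiency point can be put in rough numbers. A UDMA mode 5 burst moves one 16-bit word every 20 ns (~100 MB/s); the 500 ns terminate-and-restart figure below is a made-up placeholder for illustration, not a number from the ATA timing tables:

```python
# Back-of-the-envelope comparison of pause vs. terminate/re-initiate,
# using Felix's ~100 ns pause estimate.  REINIT_NS is a hypothetical
# placeholder, NOT a figure from the ATA timing tables.
UDMA5_CYCLE_NS = 20   # one 16-bit word every 20 ns (~100 MB/s)
PAUSE_NS = 100        # Felix's rough cost of one pause
REINIT_NS = 500       # assumed terminate + restart overhead

words = 256           # words transferred between stalls, say
pause_cost = PAUSE_NS / (words * UDMA5_CYCLE_NS)
reinit_cost = REINIT_NS / (words * UDMA5_CYCLE_NS)
print(f"pause overhead:  {pause_cost:.1%}")
print(f"reinit overhead: {reinit_cost:.1%}")
```

Whatever the real re-initiation overhead is, as long as it exceeds the ~100 ns pause, pausing wins for short stalls.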

regards,
fs

...
Without disagreeing with the claim that, per the printed spec, pause vs.
termination "should not" much matter, back in the real world I too would
enjoy hearing "under what conditions a device would typically terminate a
UDMA burst."
...
