This message is from the T13 list server.

[ BC [EMAIL PROTECTED] ]
 
> > Has anyone ever seen devices which exercise burst termination during a 
> >UDMA-in transfer other than at sector boundaries,
 
ATA devices?  I haven't looked much.  ATAPI devices? Yes.
 
1) Trivially, ATAPI UDma In by definition terminates at "word" (i.e. even byte) 
boundaries, not just at sector boundaries.
 
2) In the lab, I've seen devices designed to terminate UDma data transfer In or Out at 
random "word" boundaries.  These devices exist specifically to ask the question: can a 
host, such as a bridge to ATA/ATAPI, actually handle this case?  Often the plain answer 
in the real world is No.
 
Accordingly, I'd term such device behaviour "legal but rude".  Such device behaviour 
is natural when implementing commands like ATAPI ModeSense, that copy in a stream of 
variable-sized chunks of data: "header", "page", "page", ...  For that command, to 
baby the host, the device would have to reserve a sector of memory to buffer bytes 
before bursting them into the host.  Plus more memory for the largest "page" the 
device needed to view contiguously.
 
3) In the field, shipping a device that does choose to burst other than whole sectors 
in effect searches for hosts that choke over this.  Hopefully that search won't be 
fruitful ... but I hear it is, forming part of what leads to the abomination of 
involving end users in the decision to use UDma or not.
 
> or know of conditions 
> under which this might happen?
 
Even with ATA, this can happen by accident if the chip folk don't think to work to 
avoid it.  Often there are small FIFOs between the bus and the whole-sector buffer and 
between the whole-sector buffer and the disk.  Get a little sloppy with the 
watermarking algorithm that chooses when to start up and when to shut down transfers, 
and the bus trace will show you stopping at other than sector boundaries.
 
> would the device simply then
 
Back when I did chip design, I ran across employed people who thought that any case 
the spec did not specify was a don't care.  They actually fed this don't care into 
optimising synthesis tools that really did then produce seemingly arbitrary behaviour. 
 The most dramatic example I saw was a chip that produced random noise on its i/o 
during the don't care intervals of time.  I got the lead engineer to come in and 
change that: but the less senior engineer whom I at first failed to persuade didn't 
appreciate being overridden.
 
Pat LaVarre
 
-----Original Message----- 
From: Andrew Marsh [mailto:[EMAIL PROTECTED]] 
Sent: Thu 8/8/2002 10:46 AM 
To: [EMAIL PROTECTED] 
Cc: [EMAIL PROTECTED] 
Subject: Re: [t13] UDMA-in burst termination





        [EMAIL PROTECTED] wrote: 
        > 
        > On Thu, 08 Aug 2002 17:16:24 +0100, Andrew Marsh wrote: 
        > > Has anyone ever seen devices which exercise burst termination during a 
        > >UDMA-in transfer other than at sector boundaries, or know of conditions 
        > >under which this might happen? 
        > 
        > I'll answer the first of your questions... 
        > 
        > This is yet another thing that is very unclear in the ATA/ATAPI-x 
        > (and has been discussed here many times and I get a lot of email 
        > about it too). 
        > 
        > A DMA data transfer command may require any number of DMA bursts to 
        > transfer the data. There is no limitation on the size of a DMA burst 
        > (other than meeting the protocol requirements). There is *NO* 
        > requirement that DMA bursts start or end at sector boundaries. A host 
        > or a device may terminate the current DMA burst at any time for any 
        > reason. x86 PCI hosts frequently do this due to PCI bus activities 
        > and/or shortage of buffer/FIFO space. 
        > 
        > *** Hale Landis *** www.ata-atapi.com *** 

        Thanks, I am aware that both device and host are entitled to terminate 
        a burst at any point during the transfer command (within the protocol 
        requirements) and that this does not have to occur on sector boundaries. 
        What I am interested to know, from any device manufacturer, is what (if 
        any) internal reasons may give rise to a termination occurring after the 
        transfer of an arbitrary number of words (i.e. non-sector multiples, or, 
        more unusually, after an "odd" number of words)? 

        I guess this, and my second question (below) are more aimed at device 
        manufacturers: (apologies if this is not the right place to be raising 
        these questions) 

        Also, from ... 

        "9.13.3.2 Host pausing an Ultra DMA data-in burst 
         a) The host shall not pause an Ultra DMA burst until at least one data 
        word of an Ultra DMA burst has been transferred." 

         ... can anyone tell me what is the likely response of a drive to a host 
        that breaks this rule by asserting HDMARDY-, then, before the first 
        negation of DSTROBE, (within tFS) negating HDMARDY-? 
        i.e. the host tries to pause the transfer immediately following the 
        initiate data-in burst sequence. 

         I assume it might continue to the first negation of DSTROBE, then a 
        further zero, one, two or three DSTROBE edges (within tRFS) before actually 
        acknowledging the host's request to pause the transfer? Or would the 
        device simply then proceed to abort the transfer (before command 
        completion) and assert its interrupt with some error condition? 
