Hale, I just don't get it -- this table has always driven me batty.

At 05:01 PM 12/6/2001, Hale Landis wrote:

>Yes... The PIO timing tables have been this way since ATA-3, and as
>far as I can tell not much has changed in those tables since ATA-1
>or ATA-2. The original single table was split into two tables in
>hopes that it would make it clearer that the timings for the Data
>register are different from those for the other Command and Control
>Block registers. This has nothing to do with 8-bit data transfers.
>The timings for 8-bit data transfers were never specified by ATA-x.
>It was only when 8-bit transfers were put back into ATA/ATAPI-x
>that it was made clear that the PIO timings for the Data register
>apply to both 16-bit and 8-bit transfers (for devices that support
>8-bit data transfers).
>
>>I've interpreted that to mean that the host could only use the
>>shorter pulse widths for the data register and had to use 290 ns
>>for all the other registers.

Ya know, this has been a long-standing sticking point with me -- I'd sure like to see actual hardware that could RELIABLY handle all sorts of varying select times beyond its designed limits :) In other words, just what kind of electronics, specs, and design would handle the exact same signals being valid with different timings, with the target hardware somehow supposed to figure out which ones are valid??

UDMA works because it is the "SELECT LINE" that keeps the *register hardware* from "selecting" once the transfer starts, blocking the register hardware from "listening" to the signals. BUT this is not true in the case of talking to 1F0 vs. 1F1-1F7!!

The PIO timing table should be simplified. I think you'd have to be an idiot to ACTUALLY DESIGN HARDWARE that wouldn't handle the select times of the fastest PIO transfer rate you wish to transfer correctly! (i.e., if you want to transfer mode 4, then the entire select chain in your hardware MUST handle mode 4 timing!) So why the table that breaks this out as if different registers can have different timing? Besides, if the buffer cannot handle the highest data rate, then you are depending on pulling IORDY low anyway. Yet the actual select-logic front end MUST be capable of handling the fastest expected timing ON THE BUS.

Let me state this again: Suppose I design TWO chains of logic. The first is designed to select on the detection of 1F0 only. It is extremely fast logic. For the sake of argument, even though this is not IDE timing, I'll make the logic so it can handle 500 MHz pulses. I design the second chain to detect ONLY 1F1-1F7, and it can only handle pulses at 2 kHz, because of EXTREMELY SLOW logic. There now... the inputs are common at the bus, and the outputs of these two chains are supposed to drive their respective pieces of logic on the disk drive.

Alright now. I send pulses fast enough for the 1F0 logic (500 MHz) down these signal lines. TELL ME HOW the 1F1-1F7 logic is supposed to treat these same signals that it is connected to, and what it's supposed to do, especially when the signalling is VIOLATING THE MINIMUM TIMING REQUIREMENTS everywhere!!!??? As far as the 1F1-1F7 logic is concerned, these pulses are "noise" or "glitches"! Wouldn't the 1F1-1F7 logic randomly select, with random contents loading into our registers? Pretty reliable, eh??

What I'm complaining about is that this spec (on this particular table) has always perpetuated the illusion of what I call "phantom" hardware --> hardware that really cannot ever be built and reliably interoperate.
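
To put some numbers on it, here's a rough C sketch of what the two-table split actually implies for a drive's decode logic. The nanosecond figures are the t0/t2 values as I remember them from the ATA-2 PIO tables -- check the standard before trusting them:

    /* Illustrative PIO timing figures (ns), from memory of the
     * ATA-2 tables -- verify against the standard before use.
     *   t0 = minimum cycle time, t2 = DIOR-/DIOW- pulse width.
     */
    #include <stdio.h>

    struct pio_timing { int t0; int t2; };

    /* Data register (1F0) transfers, modes 0-4. */
    static const struct pio_timing data_reg[5] = {
        {600, 165}, {383, 125}, {240, 100}, {180, 80}, {120, 70}
    };

    /* All other Command/Control Block registers (1F1-1F7):
     * only modes 0-2 defined, all with a 290 ns pulse width. */
    static const struct pio_timing other_reg[3] = {
        {600, 290}, {383, 290}, {240, 290}
    };

    int main(void)
    {
        int mode = 4; /* host programmed for PIO mode 4 data */

        /* The address decode / select chain sees every strobe on
         * the bus, so it must resolve the SHORTEST pulse in use... */
        int fastest_t2 = data_reg[mode].t2;

        /* ...which means the "slow" 290 ns register timing buys
         * the drive's front-end logic nothing: it still has to
         * qualify 70 ns strobes cleanly or it will glitch. */
        printf("decode logic must handle %d ns strobes,\n",
               fastest_t2);
        printf("even though register accesses allow %d ns\n",
               other_reg[2].t2);
        return 0;
    }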
So engineers largely ignore it and design for the fastest anyway. So why the misleading table? *I* would rather see a table that details requirements for the VALID INTERCONNECTION of hosts and drives (what would and would not work together). Just a comment... -ron
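
P.S. Here's a sketch in C of what I mean by an interconnection table. It assumes the one rule I've been arguing for (my assumption, not anything in the spec as written): the device's entire decode chain must be rated for at least the fastest mode the host will drive on the bus.

    #include <stdio.h>

    /* Given the PIO mode the host wants to run and the fastest
     * mode the device's select/decode front end is actually rated
     * for, say whether the pair will interoperate. */
    static int interoperable(int host_mode, int device_rated_mode)
    {
        /* A front end rated for mode N can qualify any strobe at
         * mode N timing or slower. */
        return device_rated_mode >= host_mode;
    }

    int main(void)
    {
        /* Host at mode 4, device decode rated only through
         * mode 2 -> not a valid interconnection. */
        printf("host mode 4, device rated 2: %s\n",
               interoperable(4, 2) ? "OK" : "NOT VALID");
        printf("host mode 2, device rated 4: %s\n",
               interoperable(2, 4) ? "OK" : "NOT VALID");
        return 0;
    }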
