This message is from the T13 list server.

I am responding to several comments in several of Pat's emails. His
comments are preceded by '>'.

This message is long. If you respond/reply, please don't reproduce
the entire thing in your message. And please don't explicitly copy me
on your reply... I am on the T13 reflector... I don't need extra
copies of your reply message to sort through.

First some definitions:

* X-to-ATA/ATAPI bridge: Some sort of device that enables the
attachment of ATA or ATAPI devices to a 'system'. On the X side the
'system' thinks the bridge device is really an 'X' device. On the
ATA/ATAPI side the bridge device is an 'ATA host'. The bridge device
is expected to follow accepted design standards and support popular
'systems' and ATA/ATAPI devices (even if that means it must handle
strange 'system' and ATA/ATAPI implementations, this would be a
marketing requirement).

* 'system': Anything that could be an 'X' interface host or initiator
or master. Most likely it is some computer system with an OS, the
appropriate X OS device driver stack, and X interface hardware.

* 'host': An ATA/ATAPI host. The thing that is on the other side of
an ATA cable from the ATA/ATAPI devices.

* 'X': The hardware, software, protocols, etc, that are accepted as
defining an X interface. This could be a parallel port, USB, 1394,
maybe even SCSI. Note that a PCMCIA host adapter is not a bridge
device. It is a device controller. A PCMCIA host adapter does nothing
more than some host memory (or I/O) address re-mapping and some
signal timing conversion.

Second, we seem to be hung up on X being USB... OK... I know very
little about USB but I think it uses some kind of 'packet' data
transmission protocol to communicate commands, status and data to the
attached devices (but that really doesn't matter here).

Third, let's assume that the X interface uses some form of the SCSI
CDB and status reporting as a way to communicate commands and status
to/from an X type device. Let's also assume that X has its own rules
for how data is transferred. Perhaps data must be transmitted in
packets of 8, 16, 32, ..., bytes. Perhaps X understands both
even and odd length data transfers.

Fourth, let's assume that the ATA/ATAPI interface is the normal 16-bit
interface and only the standard reset and command protocols are used.
Vendor specific device commands are OK as long as they use the
standard protocols. Also let's assume that it might be possible to use
either the ATA PIO or the ATA DMA data transfer protocols with either
ATA commands or with the ATAPI PACKET command.

Now let's get a few of Pat's comments out of the way...

>Myself, I find it difficult to claim an Atapi host 
>must parse the command block given that the Ansi Ata/pi 
>standard doesn't say tell us how to do that.

I think the ATA/ATAPI standards provide enough information for both a
host and a device to understand what each command code does and what
each command parameter means. Yeah, it isn't an easy job, but the
information exists and I think it is unambiguous.

As for parsing the ATAPI PACKET command block data I assume this data
(usually 12 bytes) looks like some kind of SCSI command (Inquiry,
Read 10, etc) and it is properly padded to the 12 bytes when
necessary.
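As a sketch of that padding step (the function name is mine, not from
any standard API), assuming a 6-byte SCSI Inquiry CDB being packed
into the 12-byte ATAPI command packet:

```c
#include <stdint.h>
#include <string.h>

#define ATAPI_PACKET_LEN 12

/* Copy a 6-, 10-, or 12-byte SCSI CDB into a 12-byte ATAPI command
   packet, zero-filling the unused trailing bytes. */
static void cdb_to_atapi_packet(const uint8_t *cdb, size_t cdb_len,
                                uint8_t packet[ATAPI_PACKET_LEN])
{
    size_t n = cdb_len < ATAPI_PACKET_LEN ? cdb_len : ATAPI_PACKET_LEN;
    memset(packet, 0, ATAPI_PACKET_LEN);
    memcpy(packet, cdb, n);
}
```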

>Let's begin with a fully transparent bridge that parses 
>the command by the simple means of blindly passing the 
>bytes on to the Atapi device.  The Atapi device reports 
>back in the I/O bit which way to move data and in the 
>x1F5:1F4 Cylinder registers how much data to move.

I do not see how such a bridge device could operate. On the X side
the bridge would need to understand the direction of the data
transfer. I assume that would require parsing the SCSI CDB and/or
having some additional information provided by the X interface command
protocols. Next the bridge would need to understand the direction and
type of data transfer to perform on the ATA/ATAPI interface. Without
an understanding of what is expected to happen on the ATA/ATAPI
interface there is no way a host can execute any of the ATA or ATAPI
command protocols (unless you make a bunch of mostly invalid
assumptions about how things should happen).
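To make that concrete, here is a minimal sketch of the kind of CDB
parsing a bridge must do just to learn the transfer direction before
it can pick an ATA/ATAPI protocol. The opcode list is illustrative; a
real bridge would need a complete table:

```c
#include <stdint.h>

enum xfer_dir { DIR_NONE, DIR_READ, DIR_WRITE, DIR_UNKNOWN };

/* Determine the data direction from the SCSI CDB operation code
   (byte 0). Only a few common opcodes are shown here. */
static enum xfer_dir cdb_direction(const uint8_t *cdb)
{
    switch (cdb[0]) {
    case 0x00: return DIR_NONE;    /* TEST UNIT READY: no data */
    case 0x12: return DIR_READ;    /* INQUIRY: device to host */
    case 0x28: return DIR_READ;    /* READ (10): device to host */
    case 0x2A: return DIR_WRITE;   /* WRITE (10): host to device */
    default:   return DIR_UNKNOWN; /* must be handled somehow */
    }
}
```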

Now let's talk about that Inquiry command with the allocation length
set to 5 (it seems to be a popular topic here)...

[PIO discussion]

Let's assume that the system thinks it is valid to issue such a
command to an X device (and I know of no reason why not). And let's
assume the system understands that it must allocate an appropriately
sized buffer to hold the data packet(s) that will contain the Inquiry
data. That buffer may need to be many times bigger than the 5 bytes
of data that are expected because the X interface protocol may round
all transfers up to the next bigger packet size. Maybe the X
interface supports telling an X device both the expected transfer
size (in this case the allocation length) and also the maximum size
of the host data buffer (maybe 256 bytes in this case).
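That rounding is trivial but worth writing down; a sketch, assuming a
made-up 64-byte X packet size:

```c
#include <stdint.h>

/* Round a transfer length up to a whole number of X interface
   packets. The 64-byte packet size used in the test is a made-up
   example, not from any X specification. */
static uint32_t round_up_to_packet(uint32_t nbytes, uint32_t pkt_size)
{
    return ((nbytes + pkt_size - 1) / pkt_size) * pkt_size;
}
```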

So the bridge receives this command from the system. It looks at the
SCSI CDB and decides that it is best to use ATA/ATAPI PIO data
transfer. Seeing that this command should transfer 5 bytes, it must
round that up to 6 bytes to account for the pad byte. It also sees
that this is an ATA/ATAPI 'read' command. It follows the ATA/ATAPI
command protocols to select the ATAPI device, send the PACKET command
and send the ATAPI command block (the 12 bytes of the SCSI CDB).
Keeping things simple, let's assume the device sets BSY=0 DRQ=1 and
BC=5 when ready to transfer the data. Being a good ATA host the
bridge knows it must read the ATA Data register 3 times and it knows
the last byte is a pad byte. Now it must re-package that data into an
X interface data packet, perhaps adding many more pad bytes to fill an
X interface data packet, and send the data on to the system. (I
assume the system will understand that it asked for only 5 bytes and
will use only the first 5 bytes it receives.) Next the bridge must
fake up appropriate 'good' SCSI status (I assume SK=0, etc) and send
that to the system in a 'status' data packet.
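A sketch of that PIO data-in loop; read_byte_count() and read_data16()
are stand-ins for real taskfile register accesses (here backed by a
tiny simulated device so the byte-count math can be checked):

```c
#include <stdint.h>
#include <stddef.h>

/* Simulated device: 5 bytes of Inquiry data plus 1 pad byte.
   The data values are arbitrary. */
static const uint8_t dev_data[6] = {0x05, 0x80, 0x00, 0x21, 0x1F, 0x00};
static size_t dev_pos;

static uint16_t read_byte_count(void) { return 5; } /* BC from x1F5:1F4 */
static uint16_t read_data16(void)                   /* ATA Data register */
{
    uint16_t w = (uint16_t)dev_data[dev_pos] |
                 ((uint16_t)dev_data[dev_pos + 1] << 8);
    dev_pos += 2;
    return w;
}

/* PIO data-in phase: an odd byte count rounds up to the next word,
   so BC=5 means 3 reads of the 16-bit Data register (5 data bytes
   plus 1 pad byte). Returns the count of valid bytes. */
static size_t pio_read_drq_block(uint8_t *buf)
{
    uint16_t bc = read_byte_count();
    uint16_t words = (uint16_t)((bc + 1) / 2);
    for (uint16_t i = 0; i < words; i++) {
        uint16_t w = read_data16();
        buf[2 * i]     = (uint8_t)(w & 0xFF);
        buf[2 * i + 1] = (uint8_t)(w >> 8);
    }
    return bc; /* the trailing pad byte is simply ignored */
}
```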

So far so good?

If the system issued an Inquiry command with an allocation length of
255 (common practice it seems) but the device had only 5 bytes to
send then nothing above would really be different. There would still
be only one DRQ data block of 6 bytes (5 + 1 pad).

[MW DMA discussion]

Now let's assume the bridge receives that same command from the system
and for some reason it decides to use ATA/ATAPI Multiword DMA with
the ATAPI PACKET command. Again it follows the ATA/ATAPI command
protocols to select the device, send the PACKET command and send the
ATAPI command block (the 12 bytes of the SCSI CDB). I assume the
bridge will program its ATA DMA engine to receive 6 bytes from the
device. Keeping things simple, let's assume the device asserts DMARQ,
the DMA engine asserts DMACK, and the DMA engine reads the ATA Data
register 3 times. Again, as above, the bridge must re-package the
data, send the data, and send that faked up SCSI status.

OK so far?

If the bridge programs its DMA engine for a short byte count, say
2 bytes, then we would have a problem. The device would appear to
hang with DMARQ asserted (big buzzer in the sky goes off: BAD HOST).

If the bridge programs its DMA engine for a large byte count then we
must assume the bridge will understand when the device has completed
the data transfer (either by polling the device's status or seeing
the device assert INTRQ). We must assume the DMA engine in the bridge
has kept track of the number of bytes transferred... it will need to
know that in order to re-package the data for the X interface.
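As a sketch, that bookkeeping might look like this;
dma_bytes_remaining() is a made-up accessor for the engine's residual
counter, and the 512/6 values are just this Inquiry example:

```c
#include <stdint.h>

/* Made-up stand-in: at command completion the DMA engine reports how
   many of the programmed bytes were never transferred. Here the
   device stopped after 6 bytes (5 data + 1 pad) of a 512-byte
   programmed count. */
static uint32_t dma_bytes_remaining(void) { return 512 - 6; }

/* After deliberately programming a large byte count, the bridge
   recovers the actual transfer length from the engine's counter. */
static uint32_t dma_actual_transfer(uint32_t programmed)
{
    return programmed - dma_bytes_remaining();
}
```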

OK so far?

And we should be able to extend everything we have said so far about
a 5 byte Inquiry command to a 5 block Read 10 command. The bridge
must send the proper ATAPI PACKET command to the device and the
bridge must program its DMA engine to transfer the proper amount of
data. That data most likely would come in more than one DMA data
burst on the ATA interface. There are no residual/extra/lost data
bytes in this transfer. The last byte of each DMA burst is
immediately followed by the first byte of the next DMA burst. Only at
the end might there be a) for read commands a pad byte from the ATAPI
device, or b) for a write command a few extra bytes sent by the
bridge's DMA engine before the DMA engine detected end of DMA burst.
Of course b) would happen only if the bridge failed to program its
DMA engine with the proper transfer length. In case b) the device
would ignore the extra data bytes.
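A sketch of computing that transfer length for the Read 10, assuming
a 2048-byte ATAPI block size (a common CD-ROM value; other devices
differ):

```c
#include <stdint.h>

/* READ (10): CDB bytes 7-8 hold the big-endian transfer length in
   blocks. The bridge multiplies by the device's block size to get
   the byte count to program into its DMA engine. */
static uint32_t read10_xfer_bytes(const uint8_t *cdb, uint32_t block_size)
{
    uint16_t blocks = (uint16_t)(((uint16_t)cdb[7] << 8) | cdb[8]);
    return (uint32_t)blocks * block_size;
}
```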

If the system issued an Inquiry command with an allocation length of
255 (common practice it seems) but the device had only 5 bytes to
send then nothing above would really be different. The DMA transfer
would stop after 6 bytes (5 + 1 pad) because the device would
deassert DMARQ and at some point indicate command complete by having
status of BSY=0 and DRQ=0 and asserting INTRQ.

[Ultra DMA discussion]

Now let's assume the bridge receives that same command from the system
and for some reason it decides to use ATA/ATAPI Ultra DMA with the
ATAPI PACKET command. Again it follows the ATA/ATAPI command
protocols to select the device, send the PACKET command and send the
ATAPI command block (the 12 bytes of the SCSI CDB). I assume the
bridge will program its ATA DMA engine to receive 6 bytes from the
device. Keeping things simple, let's assume the device asserts DMARQ,
the DMA engine asserts DMACK, and the device clocks over 6 bytes of
data. Again, as above, the bridge must re-package the data, send the
data, and send that faked up SCSI status.

OK so far?

If the bridge programs its DMA engine for a short byte count, say
2 bytes, then we would have a problem. The device would appear to
hang with DMARQ asserted (big buzzer in the sky goes off: BAD HOST).

If the bridge programs its DMA engine for a large byte count then we
must assume the bridge will understand when the device has completed
the data transfer (either by polling the device's status or seeing
the device assert INTRQ). We must assume the DMA engine in the bridge
has kept track of the number of bytes transferred... it will need to
know that in order to re-package the data for the X interface.

OK so far?

And we should be able to extend everything we have said so far about
a 5 byte Inquiry command to a 5 block Read 10 command. The bridge
must send the proper ATAPI PACKET command to the device and the
bridge must program its DMA engine to transfer the proper amount of
data. That data most likely would come in more than one DMA data
burst on the ATA interface. There are no residual/extra/lost data
bytes in this transfer. The last byte of each DMA burst is
immediately followed by the first byte of the next DMA burst. Only at
the end might there be a) for read commands a pad byte from the ATAPI
device, or b) for a write command a few extra bytes sent by the
bridge's DMA engine before the DMA engine detected end of DMA burst.
Of course b) would happen only if the bridge failed to program its
DMA engine with the proper transfer length. In case b) the extra
bytes would be included in the DMA burst CRC but would otherwise be
ignored by the device.

If the system issued an Inquiry command with an allocation length of
255 (common practice it seems) but the device had only 5 bytes to
send then nothing above would really be different. The DMA transfer
would stop after 6 bytes (5 + 1 pad) because the device would
deassert DMARQ and at some point indicate command complete by having
status of BSY=0 and DRQ=0 and asserting INTRQ.

[Summary]

Gosh, did you notice how the same text got repeated over and over
above? Did you notice how similar the PIO and DMA transfers are? Did
you notice there is no difference between MW DMA and Ultra DMA
(except the CRC thing)?


***  Hale Landis  *** [EMAIL PROTECTED] ***
*** Niwot, CO USA ***   www.ata-atapi.com   ***


Subscribe/Unsubscribe instructions can be found at www.t13.org.
