> On Friday, 14th July 2000, "Rodney W. Grimes" wrote:
> >>  I suspect an interaction between the ATA driver and VIA chipsets,
> >> because other than the network, that's all that is operating when I see
> >> the underruns.  And my Celeron with a ZX chipset is immune.
> >
> >I've seen them on just about everything, chipset doesn't seem to matter,
> >IDE or SCSI doesn't seem to matter.
> Well, maybe they are just a fact of life.  But using just my vague knowledge
> of how PCI works, it doesn't look inevitable to me.  So I see bugs. :-)

Yes, there are bugs; they're in the poor specification of the PCI bus, and
in the even poorer implementations of PCI in hardware.  To quote from the
PCI 2.0 spec, starting at the bottom of page 44, section
Latency Guidelines:

    In most PCI systems, typical access latency is both short (likely
    under 2us) and easily quantified.  However, worst case latency
    (however rare) may not only be quite long, but in some cases quite
    difficult to predict.  For example, latency to a standard expansion
    adapter (ISA/EISA/MC) through a bridge is often a function of adapter
    behavior, not PCI behavior.  (This is especially problematic since
    some existing adapters are not compliant with latency parameters
    defined by the associated bus standard.)  To compensate, masters
    that require guaranteed worst case access latency must provide adequate
    buffering for 30 microseconds.  This implies a minimum of about 50 bytes
    of buffering for a 10Mbit/second LAN, and about 500 bytes for a
    100Mbit/second LAN.  (If the buffers are line organized [i.e., 16- or
    32-bit aligned] to improve PCI and target memory utilization, minimum
    buffer size likely increases.)  In spite of worst case uncertainty,
    30 microseconds should provide sufficient margin for realizable system

My calculations say that 30us is long enough to transfer about 3960 bytes;
now do you see the problem?
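To make the arithmetic concrete, here's a small sketch (plain C, numbers
only; the 33 MHz / 32-bit figures are the standard PCI assumptions, not
anything taken from the driver):

```c
#include <assert.h>

/* Bytes a 33 MHz, 32-bit PCI bus can move in `us` microseconds,
 * assuming a sustained burst of one 4-byte data phase per clock. */
static unsigned pci_bytes_in_us(unsigned us)
{
    return 33 * 4 * us;           /* 33 cycles/us * 4 bytes/cycle */
}

/* Bytes a NIC must buffer to ride out `us` microseconds of bus
 * starvation at a line rate of `mbit` Mbit/s. */
static unsigned buffer_needed_bytes(unsigned mbit, unsigned us)
{
    return mbit * us / 8;         /* Mbit/s == bits/us; /8 for bytes */
}
```

pci_bytes_in_us(30) gives 3960, matching the figure above, and
buffer_needed_bytes(100, 30) gives 375, which the spec rounds up to
"about 500 bytes".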

I think the current driver behavior is near optimal: it backs down until
it becomes latency proof (store and forward is latency proof).  The only
thing it might do better is recognize that short-term bus starvation
should not affect long-term performance; as long as the underrun events
occur at a tolerable frequency, it should not downgrade to store and forward.

Right now the code immediately steps the TXTHRESH every time we get an
underrun; it should probably use a frequency counter instead and only step
when we are seeing some intolerable rate of underruns, especially
when making the transition to store and forward.
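A sketch of what that frequency counter might look like (the names,
limit, and window are all illustrative, not the actual driver's fields):

```c
#include <assert.h>

#define TX_UNDERRUN_LIMIT  4      /* underruns tolerated per window (made up) */
#define TXTHRESH_MAX       3      /* highest setting = store and forward */

struct txthresh_state {
    int level;                    /* current TXTHRESH setting, 0..TXTHRESH_MAX */
    int underruns;                /* underruns seen in the current window */
};

/* Called on each transmit-underrun interrupt.
 * Returns 1 if the threshold was actually stepped. */
static int underrun_event(struct txthresh_state *s)
{
    if (++s->underruns < TX_UNDERRUN_LIMIT)
        return 0;                 /* tolerate short-term starvation */
    s->underruns = 0;
    if (s->level < TXTHRESH_MAX)
        s->level++;               /* step toward store and forward */
    return 1;
}

/* Called periodically (say, once a second) to forget old underruns, so
 * a tolerable background rate never degrades long-term performance. */
static void underrun_decay(struct txthresh_state *s)
{
    s->underruns = 0;
}
```

With this shape, an isolated underrun costs nothing; only a burst of them
inside one window moves TXTHRESH, and the periodic decay keeps the driver
from ratcheting down to store and forward over hours of light starvation.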

Oh... and a final note: DEC blew the chip design by including only a
160-byte threshold point, given that the PCI 2.0 spec says it should have
been 500 bytes!!  (Well, they blew it when they did the DC2114x enhancement
to the DC2104x chip by not increasing the FIFO depth to compensate
for the higher rate at which the FIFO is emptied.)
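Rough arithmetic on why 160 bytes falls short (illustrative only):

```c
#include <assert.h>

/* Microseconds it takes a `mbit` Mbit/s line rate to drain `bytes`
 * from the transmit FIFO once transmission has started. */
static unsigned drain_time_us(unsigned bytes, unsigned mbit)
{
    return bytes * 8 / mbit;      /* bits / (bits per us) */
}
```

drain_time_us(160, 100) is about 12us, well under the 30us of worst-case
latency the spec says a master must ride out, while 500 bytes would give
40us of headroom.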

> >> Getting even more technical, it appears to me that the current driver
> >> instructs the 21143 to poll for transmit packets (ie a small DMA)
> >> every 80us even if there are none to be sent.  I don't know what percentage
> >> of bus time this might be, or even how to calculate it (got some time Rod?)
> >
> >I'll have to look at that.  If it is a simple 32-bit read every 80uS
> >that's something like 0.1515% of the PCI bandwidth, something that shouldn't
> >matter much.  (I assumed a simple 4-cycle PCI operation.)  Just how big
> >is this DMA operation every 80uS?
> I believe it is just one 32 bit read.  But I don't understand that aspect
> of the hardware very well yet.  I also suspect that this polling adds
> to the latency, but again, I haven't got to the end of that either.
> Sometimes other things can distract you from even the most interesting
> technical matter. :-)
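For what it's worth, the 0.1515% figure above checks out; in integer
arithmetic (assuming the same simple 4-clock read on a 33 MHz bus):

```c
#include <assert.h>

/* Poll overhead in hundredths of a percent: `cycles` bus clocks
 * consumed once every `interval_us` microseconds on a 33 MHz bus. */
static unsigned poll_overhead_centipct(unsigned cycles, unsigned interval_us)
{
    return cycles * 10000 / (interval_us * 33);
}
```

poll_overhead_centipct(4, 80) comes out to 15 hundredths of a percent,
i.e. about 0.15% of the bus, so the poll itself is noise; any latency it
adds would matter more than the bandwidth it steals.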


Rod Grimes - KD7CAX @ CN85sl - (RWG25)               [EMAIL PROTECTED]
