On Thu, 2008-04-24 at 22:57 -0500, James fowler wrote:
> Andy, thanks for the response. It was very helpful and informative.
> 
> > > Also is there not a better way to troubleshoot pci-latency other than by
> > > trial and error?
> >
> > Analysis of requirements for the PCI devices and bridges connected to the
> > bus.
> >
> I can see that this is something a normal user would find hopeless to deal 
> with. 

I spent about a week total, reading on the internet, running down the
specs, and then taking some time to read and understand the parts of the
PCI spec I cared about.  I'm an Electrical Engineer, so the document was
tractable for me.  God help anyone who doesn't have the background.

> I am taking it from this posting that what probably would have worked was 
> leaving the BIOS setting at the default and increasing it for the pvr250.

Yup.  Especially if your FireWire interface is actually busy.


> What I saw when I first booted up was that the FireWire and pvr250 were both 
> being set to 64 by their drivers. So increasing the setting for the pvr250 
> would also have solved it, once I guessed the correct setting through trial 
> and error.

Probably setting the FireWire interface down to 48 or 32 would have
helped too, especially if you don't rely heavily on it.
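
If you'd rather script that than poke registers by hand, here's a minimal
sketch using libpci from pciutils (build with gcc -lpci, run as root).  It
does the same thing as setpci; note that setpci takes hex values, so
latency_timer=30 on the command line is this 48 decimal.  The 05:0a.0
address is made up -- check lspci for where your FireWire controller
actually sits:

  #include <stdio.h>
  #include <pci/pci.h>

  int main(void)
  {
      struct pci_access *pacc = pci_alloc();
      pci_init(pacc);
      pci_scan_bus(pacc);

      for (struct pci_dev *dev = pacc->devices; dev; dev = dev->next) {
          /* hypothetical address 05:0a.0 -- substitute your own */
          if (dev->bus == 0x05 && dev->dev == 0x0a && dev->func == 0) {
              unsigned old = pci_read_byte(dev, PCI_LATENCY_TIMER);
              pci_write_byte(dev, PCI_LATENCY_TIMER, 48);
              printf("latency timer: %u -> 48\n", old);
          }
      }
      pci_cleanup(pacc);
      return 0;
  }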

> > > Is it possible that the default of 64 is still a little low?
> My question on the default setting was whether it is correct in the general 
> sense. To me, a default setting of 96 as opposed to 64 could probably keep 
> down the majority of issues related to this.

It really depends on your system.  96 cycles (about 2.9 usecs at 33 MHz)
means that another device may sometimes not appear responsive, or may lose
data, because it can't grab the bus.

The PCI bus with a standard round-robin arbiter isn't a fantastic setup
for hard real-time applications.  I found this paper by Sebastian
Schonberg instructive:

http://os.inf.tu-dresden.de/papers_ps/schoenberg-phd.pdf

If you google the name "Sebastian Schonberg", you'll find a lot of IEEE
papers on PCI and real-time systems.

Once you've read his dissertation, the PCI bus spec is here:

http://rm-f.net/~orange/devel/specifications/pci/pci_lb3.0-2-6-04.pdf

It's less instructive, but, of course, more authoritative.


> 
> I will test setting the BIOS back to the default, then setting the pvr250 to 
> 96 decimal, and see how that does.

If you don't rely on Firewire for real-time capture of stuff or large
data block moves, that will probably be fine.


> On Thursday 24 April 2008 05:50:39 pm Andy Walls wrote:
> > James fowler wrote:
> > > I have the following system config:
> > > nforce 780i SLI Motherboard
> > > PVR-250 card
> > > Fedora 8 with all  updates.
> > >
> > > I had a real struggle getting this card to work in a stable way. I was
> > > wanting to get some other peoples thoughts on this.  I have had this card
> > > for a long time, just so you know.
> > >
> > > First I started out with Fedora 7, all updates applied.  The card gave me
> > > multiple errors, including the dreaded DMA timeouts. The others were that,
> > > when watching TV on it, I would get the buffers-full errors, the application
> > > not reading fast enough, blah, blah, blah. Video playback quality was
> > > not very good, to say the least.
> > >
> > > Finally, I upgraded to Fedora 8, whose yum updates installed kernel version
> > > 2.6.24.4-64.fc8.  One set of problems went away: the video playback
> > > looked great, and I was no longer getting the buffer-full errors. However,
> > > the DMA timeouts remained, usually occurring fairly quickly, within around
> > > 10 mins or so.
> > >
> > > Went through all the how-tos and troubleshooting, and finally started
> > > playing with the PCI latency settings.  Through MUCH trial and error I
> > > finally am using the following settings:
> > >
> > > In the BIOS I have the default set to 176 for the PCI latency.
> >
> > That means that any card the BIOS sets up gets to grab and hold the PCI
> > bus segment it is on for a maximum of
> >
> > 176 PCI bus cycles / 33 MHz = 5.33 usecs
> >
> > before having to yield the bus.  That is, if it has that much data to
> > send.
> >
> > That implies two things:
> > a) in a burst this card can maximally send (174*4) = 696 bytes
> > (maybe a little less).
> >
> > b) other transactions on that bus segment have to wait while this burst
> > is going on.  No other PCI device can use that bus segment while that
> > transfer is happening.
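> >
> > A throwaway sketch of that arithmetic, assuming the 33 MHz bus and 4-byte
> > (32-bit) data phases used above:
> >
> >   /* Back-of-envelope only: max bus hold and burst size for a given
> >    * latency timer, on a 33 MHz, 32-bit PCI bus. */
> >   #include <stdio.h>
> >
> >   int main(void)
> >   {
> >       unsigned timer = 176;              /* latency timer, in PCI clocks */
> >       double hold_us = timer / 33.0;     /* 176/33 ~= 5.33 usecs */
> >       unsigned burst = (timer - 2) * 4;  /* ~2 clocks lost to addressing */
> >       printf("hold %.2f usecs, burst <= %u bytes\n", hold_us, burst);
> >       return 0;
> >   }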
> >
> >
> > PCI latency timers are a pain in some ways: another setting that most
> > users have no good info on how to tweak to optimize their system
> > performance.  But the timers do keep the I/O bus from hanging
> > indefinitely.
> >
> > Remember the good old days of ISA with the CPU directly connected to the
> > I/O bus, and I/O cards never had to timeout their use of the bus?  I
> > painfully remember the cheap SCSI card I had that could hang the whole
> > machine at times.
> >
> > > In my rc.local startup I am using the following setting:
> > > /sbin/setpci -v -s 05:09.0 latency_timer=80
> > >
> > > Obviously the default of 64 never worked here. And the bios default was
> > > the standard 32.
> > >
> > > The only PCI card in the system is the PVR-250. And according to the
> > > lspci output, the only PCI device listed with a latency of 176 is the
> > > integrated FireWire adapter.
> >
> > By design, the latency timer is a way for the user to control response
> > times of all the PCI devices in the system to guarantee a minimum
> > response time to I/O to a particular device.
> >
> > Most devices have a minimum time (specified by MIN_GRANT in units of 8
> > PCI bus cycles: 242 nsec or ~.25 usec) that they must be allowed on the
> > bus to do anything useful.  The latency timer for a device should be no
> > lower than this minimum time.
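> >
> > As a sketch of how one might check that floor programmatically: MIN_GRANT
> > lives at config space offset 0x3e, which libpci (pciutils) exposes as
> > PCI_MIN_GNT.  Here 'dev' would come from a pci_scan_bus() walk as in the
> > earlier example, and the factor of 8 is the clocks-per-~0.25-usec
> > conversion above:
> >
> >   #include <pci/pci.h>
> >
> >   /* Lowest sane latency timer for a device, in PCI clocks. */
> >   static unsigned min_latency_clocks(struct pci_dev *dev)
> >   {
> >       return pci_read_byte(dev, PCI_MIN_GNT) * 8;
> >   }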
> >
> > Typically, but not in every case, the PCI bridges implement round-robin
> > arbiters, so there's only so much control the latency timer can give the
> > user.  The PCI spec doesn't specify any particular arbitration scheme,
> > but does require it to be "fair", whatever that means.
> >
> > Note that MAX_GRANT on some devices is a squirrely number, sometimes
> > lower than MIN_GRANT.  In this case the latency timer can get set to
> > MIN_GRANT not MAX_GRANT, but the value of MAX_GRANT could be used to
> > influence the PCI bus arbitration scheme, I guess...
> >
> > > All other integrated devices, I am fairly certain, are PCI-Express.
> > > So the question is why did I even have this problem?
> >
> > Well, something may have been hogging the bus.  When a device's latency
> > timer expires, it must yield the bus.  But it can get it right back
> > again from the arbiter, if no other device is asking for it at the time.
> >
> > Also, the PCI spec allows something called posted writes to memory mapped
> > IO (MMIO).  This means that writes can be "posted" into a bridge, not
> > having reached their final destination yet, while the CPU is informed that
> > the write cycle is done.  If the CPU starts some sort of timer without
> > making sure the write actually completed to the final destination, then
> > its event timing will be off.
> >
> > The simple way to force any potentially posted write to complete is to
> > read back from the device (unless perhaps the MMIO is marked prefetchable).
> > It may be worth a code inspection of the ivtv driver to identify and
> > evaluate where a read back after a write to MMIO is required, due to some
> > timer starting predicated on the completion of a write to MMIO.
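> >
> > The usual kernel idiom for flushing a posted write looks something like
> > this (the register offset and helper name are stand-ins, not the real
> > ivtv register map):
> >
> >   #include <linux/io.h>
> >
> >   static void start_dma_flushed(void __iomem *base, u32 cmd)
> >   {
> >       writel(cmd, base + 0x10);   /* write may sit posted in a bridge */
> >       (void)readl(base + 0x10);   /* read-back forces it to complete  */
> >       /* only now arm a timeout that assumes the write has landed */
> >   }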
> >
> > > Also is there not a better way to troubleshoot pci-latency other than by
> > > trial and error?
> >
> > Analysis of requirements for the PCI devices and bridges connected to the
> > bus.
> >
> > > Is it possible that the default of 64 is still a little low?
> >
> > Sure.  If you can't transfer a buffer from card to host in that many
> > cycles.  At four bytes per cycle and a maximum of 62 data cycles (64 minus
> > 2 clocks of overhead), that's 248 bytes.
> >
> > I'd actually bump up the timer a little from this to ensure 256 bytes
> > could be transferred.  You should take into account a maximum allowable
> > target setup of 16 clocks, plus 2 clocks for address phase and
> > turnaround.
> >
> > So that's 64 + 16 (worst case on initial setup) + 2 = 82.  Hmmm, but
> > we're typically only allowed to use multiples of 8, so let's ignore the
> > worst case, and round down to 80.  Oh, wait.... ;)
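> >
> > That sizing rule as a quick sketch (the 16 and 2 clock figures are just
> > the worst-case numbers above; treat it as a rule of thumb, not gospel):
> >
> >   #include <stdio.h>
> >
> >   /* clocks for the payload, plus worst-case 16-clock target setup,
> >    * plus 2 clocks address/turnaround, snapped down to a multiple of 8 */
> >   static unsigned latency_for_bytes(unsigned bytes)
> >   {
> >       return (bytes / 4 + 16 + 2) & ~7u;  /* 4 bytes per data phase */
> >   }
> >
> >   int main(void)
> >   {
> >       printf("%u\n", latency_for_bytes(256));  /* 82 -> 80, as above */
> >       return 0;
> >   }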
> >
> > > I am tempted to try a setting of latency_timer=60 just to test it. But I
> > > am kinda tired of messing with it for now. I spent almost two days on
> > > this.
> >
> > Don't bother for the PVR-250.  You may want to set the FireWire device's
> > latency timer lower to get it off the bus faster, if the PVR-250
> > functions are more critical to you.
> >
> > -Andy


_______________________________________________
ivtv-users mailing list
[email protected]
http://ivtvdriver.org/mailman/listinfo/ivtv-users
