[EMAIL PROTECTED] wrote:
> Quoting Philippe Gerum <[EMAIL PROTECTED]>:
> 
>> The risk in ironing out those PCI locks is running with hw interrupts
>> disabled for a long time, inducing pathological latencies, so running
>> RTAI's latency test in the background should help detect those peaks.
>>
>> However, we may find nothing bad if the kernel uses the MMConfig access
>> method for the PCI config space, since this is basically fast mmio there.
>> But since you seem to be running on x86_32, we may want to check whether
>> BIOS or direct access to the PCI config space raises the typical latency
>> too much as well (I'm unsure that PCI_GOBIOS will give us decent results,
>> though).
>>
>> To sum up: with different settings for the PCI config access method in
>> "Bus options" (in order of criticality: MMConfig, then Direct, then maybe
>> BIOS), does the latency tool report pathological peaks?
>>
> 
> Hi Philippe
> 
> I played with the different PCI configurations and the results are
> devastating. Latencies (and jitter) skyrocket after some minutes of
> testing and peak at several milliseconds. I haven't done the regression
> with 'normal' INTs yet, but that's up next. Additionally, MMCONFIG
> produced some strange messages at boot.
> 
> I suspect I'm going to use INTs for my current project :(
> 
> Bernhard
> 
> 
> PCI_GOMMCONFIG
> --------------
> 
> I get the following dmesg:
> 
> ACPI: bus type pci registered
> PCI: MCFG configuration 0: base f0000000 segment 0 buses 0 - 31
> PCI: Not using MMCONFIG.
> PCI: Fatal: No config space access function found

Your hw does not seem to have PCI Express support, hence the MMConfig access
method is not available. The kernel is expected to fall back to the direct
access method.

> [....]
> 
> 
> PCI_GODIRECT
> ------------
> 
> Here dmesg looks better:
> 
> ACPI: bus type pci registered
> PCI: Using configuration type 1 for base access

Direct access type 1 is done through in/out{b,w,l} instructions, which are not
cheap. Still, if it turns out that we don't face any unknown issue with direct
access, only virtualizing the masking/unmasking of MSI-driven interrupts would
save the day.

-- 
Philippe.

_______________________________________________
RTnet-users mailing list
RTnet-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rtnet-users
