> >> Have I been the victim of Intel hype?  They make hardware that would
> >> make it possible to have enough interrupts for each PCI card slot to
> >> have 4 interrupts or at least to have 24 hardware interrupts available
> >> to be assigned with a cross point.
> >
> > On a PCI bus, there are four interrupt lines.  All four are connected
> > to each slot, but in a different order.
>
> Yes, that was one design but it is also possible to use PNP to assign
> interrupt numbers.

Not really. "PnP" allows the OS to figure out the interrupt routing without 
requiring intrinsic knowledge of the particular hardware.
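The rotation of the four lines across slots mentioned in the quote above is often the conventional "barber-pole" swizzle. A minimal sketch, assuming the common (but not universal) wiring where the line is rotated by the slot number:

```c
/* Conventional PCI interrupt "swizzle": the four bus interrupt lines
 * are connected to every slot, but rotated by the slot number, so that
 * INTA# of adjacent slots lands on different lines.  Here pin and the
 * result are 0-based (0 = INTA# ... 3 = INTD#).  This is a common
 * convention, not something mandated for every board design. */
static int pci_swizzle(int slot, int pin)
{
    return (slot + pin) % 4;
}
```

So INTA# of slot 0 and INTA# of slot 1 end up on different bus lines, which spreads single-function devices (which must use INTA#) across the four lines.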

Unlike memory mappings, the OS generally has no control over interrupt 
routing; it has to cope with whatever it is given. The "Interrupt Line" field 
in the PCI config space has no actual hardware meaning. It's merely a 
convenient place for communication between the POST software (i.e. the PCI 
BIOS) and the real OS. The "Interrupt Pin" field is read-only, and specifies 
which PCI IRQ pin the device is wired to.
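For reference, the two fields live at fixed offsets in the type-0 configuration header. A small sketch reading them out of a raw config-space image (the `cfg_read8` helper is illustrative, not a real API):

```c
#include <stdint.h>

/* Offsets in the type-0 PCI configuration header. */
#define PCI_INTERRUPT_LINE 0x3C /* writable scratch byte, filled in by POST */
#define PCI_INTERRUPT_PIN  0x3D /* read-only: 0 = none, 1 = INTA# ... 4 = INTD# */

/* Read one byte from a raw 256-byte config-space image. */
static uint8_t cfg_read8(const uint8_t *cfg, unsigned off)
{
    return cfg[off];
}
```

The BIOS writes whatever IRQ number it routed into Interrupt Line; the hardware itself never looks at that byte again.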

I don't think the details of x86 CPU interrupt handling (APIC vs. legacy PIC) 
are particularly relevant here.

> IIUC, shared PCI interrupts are handled by a Kernel driver.  It is
> transparent to the device driver for the PCI card.

The generic PCI code may be able to narrow down the list of possible devices 
based on the interrupt line. However, there is no way for the generic PCI 
code to tell which device raised an interrupt on a shared line. The only way 
to find out is for the device-specific driver to check its own hardware.
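That check is the usual pattern for a handler on a shared line: poll your own status register, and bail out if your device didn't assert the interrupt so the other handlers on the line get a chance. A sketch against a made-up device (the struct, register layout, and names are hypothetical):

```c
#include <stdint.h>

/* Hypothetical device: bit 0 of the status register set means
 * "this device raised the interrupt". */
struct fake_dev {
    uint32_t status;
};

enum irq_result { IRQ_NONE, IRQ_HANDLED };

/* Shared-line handler pattern: first check whether our device actually
 * asserted the interrupt; if not, return IRQ_NONE so the remaining
 * handlers registered on the same line can be tried. */
static enum irq_result my_irq_handler(struct fake_dev *dev)
{
    if (!(dev->status & 1))
        return IRQ_NONE;     /* not ours: someone else on the line */
    dev->status &= ~1u;      /* acknowledge, i.e. clear the source */
    return IRQ_HANDLED;
}
```

The broken drivers mentioned below typically skipped the "is it ours?" test, or never deasserted their interrupt source, which wedges the shared line.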

In the early days of PCI, when machines typically had only a couple of PCI 
devices, there were many broken device drivers that did not cope with shared 
PCI interrupts. Now that machines have dozens of PCI devices, these have 
mostly been fixed.

There are a wide variety of PCI bus topologies and IRQ routing strategies. 
When you're designing a PCI device you have to assume you're on a shared 
line, as that's still the common case for commodity hardware. Worst case is 
that you're stuck behind a PCI bridge with several other devices and only a 
single upstream IRQ line.

PCIe is a bit different because it uses message signalled interrupts rather 
than physical interrupt pins. I don't know the details, but I guess in this 
case the host can allocate IRQs arbitrarily, and ensure each interrupt is 
uniquely identifiable.
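The guess above, that the host can hand out interrupts arbitrarily, amounts to each device being assigned its own message, so no two devices need share. A toy allocator sketch (not how any real kernel does it; real systems track free vectors per CPU and per device):

```c
/* Toy MSI-style vector allocator: every caller gets a distinct vector,
 * so interrupts are uniquely identifiable without shared lines.
 * Vectors below 32 are reserved for CPU exceptions on x86, hence the
 * starting point. */
static int next_vector = 32;

static int msi_alloc_vector(void)
{
    return next_vector++;
}
```

With wired pins the sharing is baked into the board; with messages it's purely a software allocation decision.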

> > Where "single function device" is a well defined PCI term. A
> > multifunction device is when you have multiple independent PCI devices on
> > a single physical card. eg. dual channel SCSI controllers or single-board
> > SLI graphics cards.
>
> Actually, we have multiple devices here.  Video controller and DMA
> controller.

Really? I'd expect everything to be a single PCI device. DMA controllers only 
tend to exist as separate entities on systems where the normal devices can't 
be bus-masters. You may implement it as a separate functional block in the 
FPGA, but the host system doesn't know or care about that.

Paul
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
