> On Apr 9, 2015, at 3:44 AM, Kengo NAKAHARA <[email protected]> wrote:
>
> Hi,
>
> I implemented x86 MD MSI/MSI-X support code. Here is the patch:
> http://www.netbsd.org/~knakahara/md-msi-msix/x86-md-msi-msix.patch
>
> Furthermore, here is a usage example for if_wm:
> http://www.netbsd.org/~knakahara/md-msi-msix/if_wm-msi-msix.patch
>
> I believe this MD implementation will help decide the MI APIs.
> One MI MSI/MSI-X API is [email protected]'s bus_msi(9):
> http://mail-index.netbsd.org/tech-kern/2014/06/06/msg017209.html
> another is [email protected]'s API:
> http://mail-index.netbsd.org/tech-kern/2011/08/05/msg011130.html
> The other is mine; the above patch includes my API manual.
>
> I want feedback from various device driver MSI/MSI-X implementations,
> such as usability, portability, performance, and potential issues.
> So, I would like to commit the above patch. If no one objects, I will
> commit it after one or two weeks.
> # Of course, I will commit it by dividing it into components.
>
> Could you comment on this implementation?
PCI_MSI_MSIX should be __HAVE_PCI_MSI_MSIX.

> void *vih;

That should be pci_msix_handle_t or pci_msi_handle_t, unless you still want to use pci_intr_handle_t.

> + mutex_enter(&cpu_lock);
> + error = intr_distribute(vih, affinity, NULL);
> + mutex_exit(&cpu_lock);

Why the cpu_lock? Shouldn't intr_distribute handle that itself? This should be a pci_intr_distribute, and it should be done before the establish, not after.

Why do you assume you will only allocate one slot per MSI/MSI-X call? Some devices will use multiple contiguous slots, so you have to allocate them as a set.
