On 12/12/08, Andrea Arcangeli <[email protected]> wrote:
> From: Andrea Arcangeli <[email protected]>
>
> One major limitation for KVM today is the lack of a proper way to
> write drivers in a way that allows the host OS to use direct DMA to
> the guest physical memory, avoiding any intermediate copy. The only
> API provided to drivers seems to be cpu_physical_memory_rw, which
> forces all drivers to bounce-buffer, thrash CPU caches, and be
> memory-bound. This new DMA API instead allows drivers to use a
> pci_dma_sg method for SG I/O that translates the guest physical
> addresses to host virtual addresses and calls two operations: a
> submit method and a complete method. pci_dma_sg may have to bounce
> buffer internally, and to limit the max bounce size it may have to
> submit the I/O in pieces with multiple submit calls.
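
For reference, the driver-visible contract as I read this description
would be roughly as follows. The prototypes here are my guesses, not
copied from the patch; only the type names and the submit/complete
split come from the description itself:

/* Sketch of the driver-visible contract as described above.
 * Signatures are assumptions; only the type names and the
 * submit/complete split are from the patch. */
#include <sys/uio.h>   /* struct iovec */
#include <stddef.h>

typedef struct QEMUPciDmaSg {
    unsigned long base;    /* guest physical address */
    size_t len;
} QEMUPciDmaSg;

/* Called once per piece; iov holds host virtual addresses,
 * either direct translations or internal bounce buffers. */
typedef int QEMUPciDmaSgSubmit(void *opaque,
                               struct iovec *iov, int iovcnt);

/* Called once after the last piece has finished (or on error). */
typedef void QEMUPciDmaSgComplete(void *opaque, int ret);

/* Entry point: translates the sg list and may split it into
 * several submit calls to bound the bounce-buffer size. */
void pci_dma_sg(QEMUPciDmaSg *sg, int sg_len,
                QEMUPciDmaSgSubmit *submit,
                QEMUPciDmaSgComplete *complete,
                void *opaque, int is_write);

The nice property of the split is that the core can bounce and
resubmit pieces without the driver ever noticing.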
>
> All we care about is the performance of the direct path, so I tried
> to avoid dynamic allocations there and stay out of glibc.
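
The usual trick for that, and what the next pointer in the struct
quoted below hints at, is a preallocated freelist. A minimal sketch of
the idea (my own illustration, not code from the patch):

/* Freelist sketch (illustration only): pop a preallocated param
 * struct on the fast path, touch malloc only when the pool runs
 * dry. Relies on the QEMUPciDmaSgParam struct quoted below. */
#include <stdlib.h>

#define DMA_PARAM_POOL_SIZE 64

static QEMUPciDmaSgParam dma_param_pool[DMA_PARAM_POOL_SIZE];
static QEMUPciDmaSgParam *dma_param_free_list;

static void dma_param_pool_init(void)
{
    int i;
    for (i = 0; i < DMA_PARAM_POOL_SIZE; i++) {
        dma_param_pool[i].next = dma_param_free_list;
        dma_param_free_list = &dma_param_pool[i];
    }
}

static QEMUPciDmaSgParam *dma_param_get(void)
{
    QEMUPciDmaSgParam *p = dma_param_free_list;
    if (p)
        dma_param_free_list = p->next;
    else
        p = malloc(sizeof(*p));   /* slow path, pool exhausted */
    return p;
}

static void dma_param_put(QEMUPciDmaSgParam *p)
{
    if (p >= dma_param_pool &&
        p < dma_param_pool + DMA_PARAM_POOL_SIZE) {
        p->next = dma_param_free_list;
        dma_param_free_list = p;
    } else {
        free(p);
    }
}

On the fast path this is just a pointer pop and push; glibc is only
entered when the pool is exhausted.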
>
> Signed-off-by: Andrea Arcangeli <[email protected]>
> + * QEMU PCI DMA operations
> +typedef struct QEMUPciDmaSgParam {
> + QEMUPciDmaSgSubmit *pci_dma_sg_submit;
> + QEMUPciDmaSgComplete *pci_dma_sg_complete;
> + void *pci_dma_sg_opaque;
> + QEMUPciDmaSg *sg;
> + struct QEMUPciDmaSgParam *next;
> +} QEMUPciDmaSgParam;
Why is this still named "PCI" here and in other places? Even the
"pci_dev" should become a generic bus_opaque so other buses can use
this.
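
Concretely, I'd expect something bus-neutral along these lines (a
sketch of the renaming only; the field list mirrors the quoted struct,
and the QEMUDmaSg* callback typedefs are the implied renames):

/* Bus-neutral sketch of the quoted struct: pure renaming, same
 * layout; bus_opaque replaces the PCI-specific pci_dev pointer. */
typedef struct QEMUDmaSgParam {
    QEMUDmaSgSubmit *dma_sg_submit;
    QEMUDmaSgComplete *dma_sg_complete;
    void *dma_sg_opaque;
    void *bus_opaque;          /* was pci_dev */
    QEMUDmaSg *sg;
    struct QEMUDmaSgParam *next;
} QEMUDmaSgParam;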
> +/* pci_dma.c */
Here the pci_ prefix is even incorrect, given that the file is now dma.c.