On Tue, Apr 29, 2008 at 06:12:51PM -0500, Anthony Liguori wrote:
> IIUC PPC correctly, all IO pages have corresponding struct pages. This
> means that get_user_pages() would succeed and you can reference count
> them? In this case, we would never take the VM_PFNMAP path.
get_user_pages only works on vmas where only pfns with a struct page can be mapped, but even if a struct page exists that doesn't mean get_user_pages will succeed. All mmio regions should be marked VM_IO, since reading them affects the hardware somehow, and that prevents get_user_pages from working on them regardless of whether a struct page exists (see the first sketch at the end of this mail).

> That's independent of this patchset. For non-aware guests, we'll have
> to pin all of physical memory up front and then create an IOMMU table
> from the pinned physical memory. For aware guests with a PV DMA window
> API, we'll be able to build that mapping on the fly (enforcing mlock
> allocation limits).

BTW, as far as a Linux guest is concerned, if the PV DMA API mlock ulimit triggers, the guest will crash. Nothing checks whether pci_map_single returns null (the fix would be to throttle the I/O until some other dma is completed, to split the dma into multiple operations if it's an SG entry, and if it repeatedly fails to fall back to PIO, or to return an I/O error if PIO isn't available; see the second sketch at the end of this mail). It can fail if there's lots of weird pci hardware doing rdma at the same time (for example see the iommu_arena_alloc retval in arch/alpha/kernel/pci_iommu.c). In short we'll either need ulimit -l unlimited, or we'll have to define practical limits depending on the guest driver code and the number of devices using passthrough.

I'll make the reserved-ram patch incremental with those patches, then it should pick the right pfn coming from /dev/mem without my page_count == 0 check, and then I'll only have to fix up the page pinning (so it'll likely also be incremental with the kvm mmu notifier patch, so I can hope to get something final and remove page pinning for good, not only on mmio regions that don't have a struct page).

I currently have trouble booting the host because of the blk-settings.c change done in 2.6.25; I thought I'd fixed that already... (I did when loading the host kernel in kvm, but on real hardware it still fails for another reason). And Andrew sent me a large email about mmu notifiers, so before I return to reserved-ram I have to answer him and upload an updated mmu-notifier patch with certain cleanups he requested. So go ahead ignoring reserved-ram and mmu notifiers; I'll pick up whatever is available in or outside kvm.git when I'm ready.

Thanks!
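P.S. For reference, the refusal described above is roughly the vma check at the top of get_user_pages() (paraphrased from mm/memory.c of that era, not verbatim; details vary between kernel versions):

	#include <linux/mm.h>

	/*
	 * Sketch of the per-vma check get_user_pages() applies:
	 * a VM_IO or VM_PFNMAP vma is rejected outright (-EFAULT),
	 * whether or not a struct page backs the pfn, because reads
	 * on mmio can have hardware side effects and raw pfn
	 * mappings carry no struct page guarantee.
	 */
	static int gup_vma_ok(struct vm_area_struct *vma, unsigned int vm_flags)
	{
		if (!vma || (vma->vm_flags & (VM_IO | VM_PFNMAP)))
			return 0;
		/* the vma must also allow the requested access */
		if (!(vm_flags & vma->vm_flags))
			return 0;
		return 1;
	}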
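P.P.S. And a minimal sketch of the defensive pci_map_single handling described above, assuming hypothetical driver helpers my_can_defer() and my_do_pio() (made-up names for illustration); note pci_dma_mapping_error() took a single argument in kernels of this era:

	#include <linux/pci.h>

	static int my_map_and_submit(struct pci_dev *pdev, void *buf, size_t len)
	{
		dma_addr_t dma = pci_map_single(pdev, buf, len,
						PCI_DMA_TODEVICE);

		if (pci_dma_mapping_error(dma)) {
			/* throttle: requeue until some other dma completes */
			if (my_can_defer(pdev))
				return -EBUSY;
			/* repeated failure: fall back to PIO if available */
			if (my_do_pio(pdev, buf, len) == 0)
				return 0;
			/* no PIO fallback: surface an I/O error */
			return -EIO;
		}
		/* mapping succeeded: program the device with dma here */
		return 0;
	}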