> +/* Return a new IOVector that's a subset of the passed in IOVector. It
> + * should be freed with qemu_free when you are done with it. */
> +IOVector *iovector_trim(const IOVector *iov, size_t offset, size_t size);
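For reference, here is a minimal standalone sketch of what a trim helper along these lines could look like. The IOVector/IOVectorElement layout and the iovector_free() name are my assumptions (inferred from the quoted prototype, not taken from the patch); the point is that pairing the constructor with a matching iovector_free() keeps the allocator out of the API contract:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical layout, inferred from the quoted prototype. */
typedef struct IOVectorElement {
    void *base;
    size_t len;
} IOVectorElement;

typedef struct IOVector {
    int num;
    IOVectorElement sg[];
} IOVector;

/* Return a new IOVector covering [offset, offset + size) of IOV.
 * Free it with iovector_free(), so callers never depend on which
 * allocator is used internally. */
static IOVector *iovector_trim(const IOVector *iov, size_t offset, size_t size)
{
    IOVector *out = malloc(sizeof(IOVector) +
                           iov->num * sizeof(IOVectorElement));
    int i, n = 0;

    for (i = 0; i < iov->num && size; i++) {
        size_t len = iov->sg[i].len;
        if (offset >= len) {        /* element entirely before the window */
            offset -= len;
            continue;
        }
        len -= offset;              /* clip the front of the element */
        if (len > size)
            len = size;             /* clip the back of the element */
        out->sg[n].base = (char *)iov->sg[i].base + offset;
        out->sg[n].len = len;
        n++;
        size -= len;
        offset = 0;
    }
    out->num = n;
    return out;
}

static void iovector_free(IOVector *iov)
{
    free(iov);
}
```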
Using qemu_free directly seems a bad idea. I guess we're likely to want to switch to a different memory allocation scheme in the future. The comment is also potentially misleading because iovector_new() doesn't mention anything about having to free the vector.

> +int bdrv_readv(BlockDriverState *bs, int64_t sector_num,
>...
> +    size = iovector_size(iovec);
> +    buffer = qemu_malloc(size);

This concerns me for two reasons:
(a) I'm always suspicious about the performance implications of using malloc on a hot path.
(b) The size of the buffer is unbounded. I'd expect multi-megabyte transfers to be common, and gigabyte-sized operations are plausible.

At minimum you need a comment acknowledging that we've considered these issues.

> +void *cpu_map_physical_page(target_phys_addr_t addr)
> +    /* DMA'ing to MMIO, just skip */
> +    phys_offset = cpu_get_physical_page_desc(addr);
> +    if ((phys_offset & ~TARGET_PAGE_MASK) != IO_MEM_RAM)
> +        return NULL;

This is not OK. It's fairly common for smaller devices to use a separate DMA engine that writes to an MMIO region. You also never check the return value of this function, so it will crash qemu.

> +void pci_device_dma_unmap(PCIDevice *s, const IOVector *orig,

This function should not exist. Dirty bits should be set by the memcpy routines.

Paul

_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel