On Thu, Oct 4, 2012 at 5:19 PM, Avi Kivity <a...@redhat.com> wrote:
> On 10/04/2012 07:13 PM, Blue Swirl wrote:
>> On Thu, Oct 4, 2012 at 6:38 AM, Avi Kivity <a...@redhat.com> wrote:
>>> On 10/03/2012 10:24 PM, Blue Swirl wrote:
>>>> >
>>>> >  #else
>>>> > -void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>>> > -                            int len, int is_write)
>>>> > +
>>>> > +void address_space_rw(AddressSpace *as, target_phys_addr_t addr, uint8_t *buf,
>>>> > +                      int len, bool is_write)
>>>>
>>>> I'd make address_space_* use uint64_t instead of target_phys_addr_t
>>>> for the address. It may actually be buggy for 32 bit
>>>> target_phys_addr_t and 64 bit DMA addresses, if such architectures
>>>> exist. Maybe memory.c could be made target independent one day.
>>>
>>> We can make target_phys_addr_t 64 bit unconditionally. The fraction of
>>> deployments where both host and guest are 32 bits is dropping, and I
>>> doubt the performance drop is noticeable.
>>
>> My line of thought was that memory.c would not be tied to physical
>> addresses, but it would be more general. Then exec.c would specialize
>> the API to use target_phys_addr_t. Similarly PCI would specialize it
>> to pcibus_t, PIO to pio_addr_t and DMA to dma_addr_t.
>
> The problem is that any transition across the boundaries would then
> involve casts (explicit or implicit), with the constant worry of whether
> we're truncating or not. Note we have transitions in both directions,
> with the higher layer APIs calling memory APIs, and the memory API
> calling them back via MemoryRegionOps or a new MemoryRegionIOMMUOps.
>
> What does this flexibility buy us, compared to a single hw_addr fixed at
> 64 bits?
They can all be 64 bits; I'm just considering the types. Getting rid of
target_phys_addr_t, pcibus_t, pio_addr_t and dma_addr_t (are there
more?) may also be worthwhile. A rough sketch of that direction is
below.

>
> --
> error compiling committee.c: too many arguments to function
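For illustration only, a minimal sketch of the single-type direction
(hw_addr is the name Avi used above; the typedef and the prototype are
my guesses at what it could look like, not code from the actual patch):

#include <stdint.h>
#include <stdbool.h>

/* Opaque; defined elsewhere in the memory API. */
typedef struct AddressSpace AddressSpace;

/* Hypothetical: one fixed 64-bit address type replacing
 * target_phys_addr_t, pcibus_t, pio_addr_t and dma_addr_t. */
typedef uint64_t hw_addr;

/* The patch's new entry point, retyped. Callers on 32-bit buses
 * widen implicitly, so no truncating casts appear at the API
 * boundary in either direction. */
void address_space_rw(AddressSpace *as, hw_addr addr, uint8_t *buf,
                      int len, bool is_write);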