On 10/04/2012 10:47 AM, Peter Maydell wrote:
>>> I'd make address_space_* use uint64_t instead of target_phys_addr_t
>>> for the address. It may actually be buggy for 32 bit
>>> target_phys_addr_t and 64 bit DMA addresses, if such architectures
>>> exist. Maybe memory.c could be made target independent one day.
>
> I thought target_phys_addr_t was supposed to be "widest necessary
> address", so an architecture with 64 bit DMA addresses should be
> implemented with 64 bit target_phys_addr_t...
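For concreteness, a minimal, self-contained sketch of the truncation
hazard described above (the typedef and values are hypothetical; none
of the address_space_* internals are reproduced here):

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Hypothetical: a target built with 32-bit physical addresses. */
    typedef uint32_t target_phys_addr_t;

    int main(void)
    {
        /* A device issues a 64-bit DMA address just above 4 GB ... */
        uint64_t dma_addr = 0x100001000ULL;
        /* ... which is silently truncated at the memory API boundary,
         * so the access lands in the low 4 GB instead. */
        target_phys_addr_t phys = dma_addr;
        printf("DMA to 0x%" PRIx64 " lands at 0x%" PRIx32 "\n",
               dma_addr, phys);
        return 0;
    }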
Yes. Since even before this patchset all DMA went through the memory API
(and before the memory API, through cpu_register_physical_memory), this
case was already broken.

>
>> We can make target_phys_addr_t 64 bit unconditionally. The fraction of
>> deployments where both host and guest are 32 bits is dropping, and I
>> doubt the performance drop is noticeable.
>
> I benchmarked it on ARM when I switched ARM to always 64 bit physaddrs;
> it was just about measurable (maybe half a percent or so) but not
> significant (we don't use this type in the fast path for memory
> accesses, only in the slow/IO path).
>
> I'd favour just moving everything to 64 bits (the remaining
> 32 bit targets are: cris, lm32, m68k, microblaze, or32, sh4, unicore32,
> xtensa). However it does require some review of devices to check that
> they're not using target_phys_addr_t when they meant uint32_t [eg
> in registers for DMA devices] or relying on 32 bit wraparound.

I posted a patch. I did no review; the cost/benefit tradeoff is horrible
IMO, especially with me not knowing the details. If something breaks,
git bisect will save the day.

-- 
error compiling committee.c: too many arguments to function
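For illustration, a minimal sketch (hypothetical names, not taken from
any real device model) of the kind of bug such a device review would be
hunting for: a guest-visible register that is architecturally 32 bits
wide, but stored in target_phys_addr_t, stops wrapping at 4 GB once the
type becomes 64 bits unconditionally:

    #include <stdint.h>

    /* After the unconditional switch, target_phys_addr_t is 64 bits
     * on every target. */
    typedef uint64_t target_phys_addr_t;

    /* Wrong: the guest-visible DMA address register silently widens
     * with the build-time type. */
    struct dma_dev_bad {
        target_phys_addr_t dst;
    };

    /* Right: the register is 32 bits by the device's specification,
     * so address arithmetic keeps wrapping at 2^32 like the hardware. */
    struct dma_dev_good {
        uint32_t dst;
    };

    int main(void)
    {
        struct dma_dev_bad  bad  = { .dst = 0xffffffffu };
        struct dma_dev_good good = { .dst = 0xffffffffu };
        bad.dst  += 1;  /* 0x100000000: behaviour changes after the switch */
        good.dst += 1;  /* 0: wraps as the real hardware would */
        return 0;
    }

Devices written in the second form are unaffected by the width of
target_phys_addr_t, which is why only the first kind needs review.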