On Wed, May 21, 2025 at 11:36 AM Paolo Bonzini <bonz...@gnu.org> wrote:
>
>
>
> Il mer 21 mag 2025, 10:21 Zhao Liu <zhao1....@intel.com> ha scritto:
>>
>> I also realize that once FlatRange/FlatView is associated with 
>> GuestMemoryRegion/
>> GuestMemory, it changes the usual practice in QEMU, where most memory 
>> operations
>> are built around MemoryRegion/AddressSpace.
>
>
> That shouldn't be a problem. In both QEMU and vm-memory, DMA always starts from 
> AddressSpace/GuestAddressSpace, not from MemoryRegion, so if QEMU implements 
> GuestAddressSpace in qemu_api::AddressSpace everything matches well. The only 
> difference is that Rust code will do something like
>
>   AddressSpace::MEMORY.memory().read(...)
>
> (which retrieves the FlatView) instead of
>
>   address_space_read(&address_space_memory, ...)
>
> But that's just how the API is defined. It seems good to me. The mismatch 
> between MemoryRegion and GuestMemoryRegion is confusing, but will be mostly 
> hidden behind the prelude because Guest* are traits not structs.
>
>> > So... not sure what to do there.  It seems like vm-memory is very close to
>> > being usable by QEMU, but maybe not completely. :(
>>
>> Is it possible or necessary for vm-memory to support overlap? Because I
>> feel that if it is possible, the problem might be simplified. (As a
>> beginner, I have yet to understand exactly how difficult it is.)
>
>
> I don't think that's necessary. Just like in QEMU C code we have AddressSpace 
> for DMA and MemoryRegion for hierarchy, in Rust code you have 
> qemu_api::{AddressSpace,MemoryRegion}. FlatView, FlatRange, 
> MemoryRegionSection are hidden in both cases, and users don't care much about 
> which type implements GuestMemoryRegion because all they see is AddressSpace. 
> Again, it's all hidden behind the prelude.
>
> The real problem is how hard it is to remove the references from the 
> vm-memory API... Maybe not much.
>
> Paolo
>
>>
>> Thanks,
>> Zhao
>>
>>

vm-memory is a very rigid API, unfortunately. It's excellent for
rust-vmm's purposes. I presume it's possible to come up with a clever
solution that satisfies both rust-vmm's and QEMU's needs, but I'm not
sure it's worth it. It's really hard to retrofit other projects onto
vm-memory if they don't follow the rust-vmm crates' API design, and
doing so might make both the rust-vmm code and the QEMU code more
complex: QEMU would depend on rust-vmm's architectural decisions and
vice versa. What I fear most is needing to refactor QEMU's memory APIs
in the future and finding that the vm-memory dependency has turned
into technical debt.

Perhaps it's more sensible not to wrap our APIs in external
dependencies, though we can certainly design our Rust bindings with
them as inspiration. I think that's an inescapable consequence of
QEMU's internals being fluid over time and "private"/unstable.

Personal anecdote: I tried using vm-memory in a personal TCG-like
emulator I'm writing for fun, and I found it a daunting task as new
rust-vmm concepts crept into my codebase as scope creep. And I wasn't
even adapting an existing API to vm-memory, but designing a new one
based on it. I gave up after a few days.

-- 
Manos Pitsidianakis
Emulation and Virtualization Engineer at Linaro Ltd
