On Fri, 15 Feb 2019 15:48:43 +0100 Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 15/02/19 12:33, Igor Mammedov wrote:
> > On Thu, 14 Feb 2019 19:11:27 +0100
> > Paolo Bonzini <pbonz...@redhat.com> wrote:
> >
> >> On 14/02/19 15:07, Igor Mammedov wrote:
> >>> Also some boards (ab)use memory_region_allocate_system_memory(),
> >>> calling it several times to allocate various fixed-sized chunks of
> >>> RAM and ROMs, which is problematic to map to a single initial RAM
> >>> Machine::memdev backend and is currently broken if -mem-path points
> >>> to a non-hugepage pool.
> >>
> >> This is certainly a good idea. However, I'm not sure why you would need
> >> a memdev property on the Machine instead of just allowing 1 -numa node,
> >> which is what it really is.
> >
> > Using '-numa node' would be confusing to a user who is not interested
> > in the NUMA usecase. It would also cause the NUMA fdt/acpi parts to be
> > generated automatically (fixable, but then again it adds more
> > confusion), and in the end there are boards that do not support NUMA
> > at all (s390x).
>
> Fair enough.
>
> What about -m, too? Then you'd specify a memdev instead of the initial
> memory size.

That's somewhat what I've planned: make -m X translate into

  -object memory-backend-ram,id=magically-get-what-board-uses-now,size=X -machine memdev=thatid

One more reason for memdev vs device is that -numa now uses memdevs, and
so far it doesn't look like non-NUMA initial RAM would get immediate
benefits from using -device on most boards (well, I couldn't come up with
any, modulo consistent backend/frontend usage).

> Paolo
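
P.S. To make the -m translation above a bit more concrete, a plain

  qemu-system-x86_64 -m 4G ...

would then end up roughly equivalent to spelling the backend out by hand,
something along the lines of

  qemu-system-x86_64 \
      -object memory-backend-ram,id=pc.ram,size=4G \
      -machine pc,memdev=pc.ram \
      ...

This is only a sketch of the proposal: the "pc.ram" id stands in for
whatever id the board actually uses today, and the "memdev" machine
property is the new property being proposed here, not an existing
interface.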