On 11/12/2013 16:36, Igor Mammedov wrote:
>> > -object memory-ram,size=1024M,policy=membind,host-nodes=0,id=ram-node0 \
>> > -numa node,nodeid=0,cpus=0,memdev=ram-node0 \
>> > -object memory-ram,size=1024M,policy=interleave,host-nodes=1-3,id=ram-node1 \
>> > -numa node,nodeid=1,cpus=1,memdev=ram-node1
>
> I was thinking about a bit more radical change:
>
> -object memory-ram,size=1024M,policy=membind,host-nodes=0,id=ram-node0
> -device dimm,memdev=ram-node0,node=0
> -object memory-ram,size=1024M,policy=membind,host-nodes=1,id=ram-node1
> -device dimm,memdev=ram-node1,node=1
>
> that would allow to avoid the synthetic -numa option
You still need -numa for cpus, at least in the short/medium term.

> but would require conversion of initial RAM to dimms. That would be
> more flexible, for example allowing binding of several backends to one
> node (like: 1Gb_hugepage + 2Mb_hugepage ones)

Yes, that's another possibility, perhaps even cleaner.

With "-numa node,memdev=", your board only needs to use
memory_region_allocate_system_memory in order to apply a NUMA policy to
guest RAM; it also works with guests or boards that are not themselves
NUMA-aware.

Doing the same with "-device dimm" requires changing the board to
support RAM-as-dimms; do you have any idea how easy/hard that would be?

Paolo
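For comparison, the two proposals in this thread could be sketched roughly as follows; this is only an illustration assembled from the options quoted above, and the exact option spellings and node/cpu layout are assumptions, not tested invocations:

```shell
# Alternative 1 (quoted proposal): bind each backend to a guest NUMA node
# via -numa node,memdev=; the board only needs to allocate guest RAM with
# memory_region_allocate_system_memory for the policy to apply.
qemu-system-x86_64 \
  -object memory-ram,size=1024M,policy=membind,host-nodes=0,id=ram-node0 \
  -numa node,nodeid=0,cpus=0,memdev=ram-node0 \
  -object memory-ram,size=1024M,policy=interleave,host-nodes=1-3,id=ram-node1 \
  -numa node,nodeid=1,cpus=1,memdev=ram-node1

# Alternative 2 (Igor's suggestion): model initial RAM as dimm devices,
# which would require board support for RAM-as-dimms; -numa would still
# be needed in the short/medium term for the cpus<->node mapping.
qemu-system-x86_64 \
  -numa node,nodeid=0,cpus=0 \
  -numa node,nodeid=1,cpus=1 \
  -object memory-ram,size=1024M,policy=membind,host-nodes=0,id=ram-node0 \
  -device dimm,memdev=ram-node0,node=0 \
  -object memory-ram,size=1024M,policy=membind,host-nodes=1,id=ram-node1 \
  -device dimm,memdev=ram-node1,node=1
```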
