On 9/25/22 13:02, Alex Bennée wrote:
I'm not keen on adding another field like this.

Hmm I thought it was unavoidable from Edgar's comment:

   "CPU's can also have a Master-IDs (Requester IDs) which are unrelated to
   the Clusters CPU index. This is used for example in the Xilinx ZynqMP
   and Xilinx Versal and the XMPU (Memory Protection Units).

   Anyway, I think this approach is an improvement from the current state
   but would rather see a new separate field from requester_id. Overloading
   requester_id will break some of our use-cases (in the Xilinx tree)..."

Of course we don't have to care about external use cases but it seemed
to indicate we might need both.
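
Roughly what I read Edgar as asking for is something like this - I'm
sketching include/exec/memattrs.h from memory so the surrounding bits are
abridged, and the new field's name and width are placeholders rather than a
proposal:

    typedef struct MemTxAttrs {
        unsigned int unspecified:1;
        unsigned int secure:1;
        /* ... the other existing attribute bits, elided here ... */
        unsigned int requester_id:16;  /* untouched, stays the bus-level master ID */
        unsigned int cpu_index:16;     /* hypothetical new field for the vCPU index */
    } MemTxAttrs;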

I missed that one.

What bounds our max number of cpus at the moment?  We use "int" in
struct CPUCore, but that's almost certainly for convenience.

target/s390x/cpu.h:#define S390_MAX_CPUS 248
hw/i386/pc_piix.c:    m->max_cpus = HVM_MAX_VCPUS;
hw/i386/pc_q35.c:    m->max_cpus = 288;
hw/arm/virt.c:    mc->max_cpus = 512;
hw/arm/sbsa-ref.c:    mc->max_cpus = 512;
hw/i386/microvm.c:    mc->max_cpus = 288;
hw/ppc/spapr.c:    mc->max_cpus = INT32_MAX;

Most of these are nicely bounded, but HVM_MAX_VCPUS is a magic number
from Xen, and ppc appears to be prepared for 31 bits worth of cpus.

From 5642e4513e (spapr.c: do not use MachineClass::max_cpus to limit
CPUs) I think it is being a little optimistic. Even with the beefiest
hosts you start to see diminishing returns by ~12 vCPUs and it won't
take long before each extra vCPU just slows you down.

Ok, I guess. If we decided to add an assert that the cpuid fit in this field, we'd want to revisit this, I think. Not an issue for the moment.
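
To make that concrete, all I have in mind is something along these lines at
whatever spot ends up populating the field - the 16-bit limit here is a
placeholder, not a proposal, and depends on the width we settle on:

    /* Hypothetical: catch any machine whose cpu_index outgrows the field. */
    assert(cs->cpu_index >= 0 && cs->cpu_index < (1 << 16));
    attrs.cpu_index = cs->cpu_index;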

I was confused by the last comment because I forgot the TLBs are not
shared between cores. So I can just bang:

     MemTxAttrs attrs = { .cpu_index = cs->cpu_index };

into arm_cpu_tlb_fill and be done with it?

Yes, it looks like it. I don't see any bulk overwrite of attrs in get_phys_addr and subroutines. Mostly modifications of attrs->secure, as expected.
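
If it helps, the property we're relying on can be shown with a throwaway
toy (standalone C, not QEMU code; FakeTxAttrs and fake_get_phys_addr are
made-up stand-ins): the walk only tweaks individual bits like .secure, so a
cpu_index seeded up front comes out the other side intact.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Made-up stand-in for MemTxAttrs. */
    typedef struct {
        unsigned int secure:1;
        unsigned int cpu_index:16;
    } FakeTxAttrs;

    /* Stand-in for get_phys_addr(): member-wise tweaks only, no bulk
     * overwrite of the struct. */
    static void fake_get_phys_addr(FakeTxAttrs *attrs, bool secure)
    {
        attrs->secure = secure;
    }

    int main(void)
    {
        FakeTxAttrs attrs = { .cpu_index = 3 };  /* seeded in the tlb_fill hook */
        fake_get_phys_addr(&attrs, true);
        assert(attrs.cpu_index == 3);            /* survives the walk */
        printf("secure=%u cpu_index=%u\n",
               (unsigned)attrs.secure, (unsigned)attrs.cpu_index);
        return 0;
    }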


r~
