Jonathan Cameron <jonathan.came...@huawei.com> writes:
> Used to drive the MPAM cache initialization and to exercise more
> of the PPTT cache entry generation code. Perhaps a default
> L3 cache is acceptable for max?
>
> Signed-off-by: Jonathan Cameron <jonathan.came...@huawei.com>
> ---
>  target/arm/tcg/cpu64.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
> index 8019f00bc3..2af67739f6 100644
> --- a/target/arm/tcg/cpu64.c
> +++ b/target/arm/tcg/cpu64.c
> @@ -711,6 +711,17 @@ void aarch64_max_tcg_initfn(Object *obj)
>      uint64_t t;
>      uint32_t u;
>
> +    /*
> +     * Expanded cache set
> +     */
> +    cpu->clidr = 0x8204923; /* 4 4 4 4 3 in 3 bit fields */
> +    cpu->ccsidr[0] = 0x000000ff0000001aull; /* 64KB L1 dcache */
> +    cpu->ccsidr[1] = 0x000000ff0000001aull; /* 64KB L1 icache */
> +    cpu->ccsidr[2] = 0x000007ff0000003aull; /* 1MB L2 unified cache */
> +    cpu->ccsidr[4] = 0x000007ff0000007cull; /* 2MB L3 cache 128B line */
> +    cpu->ccsidr[6] = 0x00007fff0000007cull; /* 16MB L4 cache 128B line */
> +    cpu->ccsidr[8] = 0x0007ffff0000007cull; /* 2048MB L5 cache 128B line */
> +

I think Peter in another thread wondered if we should have a generic
function for expanding the cache ID registers based on an abstract
cache definition. (A rough, untested sketch of what I mean is at the
end of this mail.)

>      /*
>       * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a real
>       * one and try to apply errata workarounds or use impdef features we
> @@ -828,6 +839,7 @@ void aarch64_max_tcg_initfn(Object *obj)
>      t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
>      t = FIELD_DP64(t, ID_AA64MMFR2, EVT, 2); /* FEAT_EVT */
>      t = FIELD_DP64(t, ID_AA64MMFR2, E0PD, 1); /* FEAT_E0PD */
> +    t = FIELD_DP64(t, ID_AA64MMFR2, CCIDX, 1); /* FEAT_CCIDX */
>      cpu->isar.id_aa64mmfr2 = t;
>
>      t = cpu->isar.id_aa64zfr0;

--
Alex Bennée
Virtualisation Tech Lead @ Linaro
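
For illustration only: make_ccsidr64() is an invented name, not an
existing QEMU helper, and the sketch assumes ctz32() from
qemu/host-utils.h and the 64-bit (FEAT_CCIDX) CCSIDR_EL1 layout.

/*
 * Untested sketch: derive a 64-bit CCSIDR_EL1 value from an abstract
 * cache description.  Field layout with FEAT_CCIDX:
 *   LineSize[2:0]       = log2(line size in words) - 2
 *   Associativity[23:3] = ways - 1
 *   NumSets[55:32]      = sets - 1
 */
static uint64_t make_ccsidr64(unsigned line_bytes, unsigned ways,
                              unsigned sets)
{
    uint64_t v = 0;

    /* line_bytes must be a power of two, at least 16 bytes (4 words) */
    v |= (uint64_t)(ctz32(line_bytes) - 4);  /* LineSize */
    v |= (uint64_t)(ways - 1) << 3;          /* Associativity */
    v |= (uint64_t)(sets - 1) << 32;         /* NumSets */
    return v;
}

e.g. make_ccsidr64(64, 4, 256) gives 0x000000ff0000001a, which matches
the 64KB L1 dcache value in the hunk above; CLIDR could be built up
similarly from a list of per-level cache types.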