On Fri, Nov 08, 2019 at 12:07:14PM +0100, David Hildenbrand wrote:
> For a specific CPU model, we have a lot of feature variability depending on
> - The microcode version of the HW
> - The hypervisor we're running on (LPAR vs. KVM vs. z/VM)
> - The hypervisor version we're running on
> - The KVM version
> - KVM module parameters (especially, "nested=1")
> - The accelerator
> 
> Our default models are migration safe, however can only be changed
> between QEMU releases (glued to QEMU machine). This somewhat collides
> with the feature variability we have. E.g., the z13 model will not run
> under TCG. There is the demand from higher levels in the stack to "have the
> best CPU model possible on a given accelerator, firmware and HW", which
> should especially include all features that fix security issues.
> Especially, if we have a new feature due to a security flaw, we want to
> have a way to backport this feature to older QEMU versions and a way to
> automatically enable it when asked.
> 
> This is where "best" CPU models come into play. If upper layers specify
> "z14-best" on a z14, they will get the best possible feature set in that
> configuration. "best" usually means "maximum features", besides deprecated
> features. This will then, for example, include nested virtualization
> ("SIE" feature) when KVM+HW support is enabled, or fixes via
> microcode updates (e.g., spectre)
> 
> "best" models are not migration safe. Upper layers can expand these
> models to migration-safe and static variants, allowing them to be
> migrated.
> 
> Signed-off-by: David Hildenbrand <da...@redhat.com>
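(Aside, to check my mental model of how upper layers would consume
this; the command line and the expansion output below are invented
purely for illustration: I assume management software would start
with something like

  $ qemu-system-s390x -accel kvm -cpu z14-best ...

and then, to get something migration safe, expand the model via the
existing query-cpu-model-expansion command, roughly:

  -> { "execute": "query-cpu-model-expansion",
       "arguments": { "type": "static",
                      "model": { "name": "z14-best" } } }
  <- { "return": { "model": { "name": "z14-base",
                              "props": { "sie": true } } } }

I'm assuming the existing expansion semantics simply apply to the new
models.)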
Makes sense to me, and the code looks good. I just have one question
below:

> ---
[...]
> +static void s390_best_cpu_model_initfn(Object *obj)
> +{
> +    const S390CPUModel *max_model;
> +    S390CPU *cpu = S390_CPU(obj);
> +    S390CPUClass *xcc = S390_CPU_GET_CLASS(cpu);
> +    Error *local_err = NULL;
> +    int i;
> +
> +    if (kvm_enabled() && !kvm_s390_cpu_models_supported()) {
> +        return;
> +    }
> +
> +    max_model = get_max_cpu_model(&local_err);
> +    if (local_err) {
> +        /* we expect errors only under KVM, when actually querying the kernel */
> +        g_assert(kvm_enabled());
> +        error_report_err(local_err);
> +        return;
> +    }
> +
> +    /*
> +     * Similar to baselining against the "max" model. However, features
> +     * are handled differently and are not used for the search for a definition.
> +     */
> +    if (xcc->cpu_def->gen == max_model->def->gen) {
> +        if (xcc->cpu_def->ec_ga > max_model->def->ec_ga) {
> +            return;
> +        }
> +    } else if (xcc->cpu_def->gen > max_model->def->gen) {
> +        return;
> +    }

What exactly is expected to happen if we return from the function here?

(In x86, we worked around the inability to report errors inside
instance_init by adding another step to CPU object initialization,
called "CPU expansion", implemented by x86_cpu_expand_features().)

> +
> +    /* The model is theoretically runnable, construct the features. */
> +    cpu->model = g_new(S390CPUModel, 1);
> +    cpu->model->def = xcc->cpu_def;
> +    bitmap_copy(cpu->model->features, xcc->cpu_def->full_feat, S390_FEAT_MAX);
> +
> +    /* Mask off features that are not available in the "max" model */
> +    bitmap_and(cpu->model->features, cpu->model->features, max_model->features,
> +               S390_FEAT_MAX);
> +
> +    /* Mask off deprecated features */
> +    clear_bit(S390_FEAT_CONDITIONAL_SSKE, cpu->model->features);
> +
> +    /* Make sure every model passes consistency checks */
> +    for (i = 0; i < ARRAY_SIZE(cpu_feature_dependencies); i++) {
> +        if (!test_bit(cpu_feature_dependencies[i][1], cpu->model->features)) {
> +            clear_bit(cpu_feature_dependencies[i][0], cpu->model->features);
> +        }
> +    }
> +}
[...]

-- 
Eduardo
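
P.S.: For reference, this is roughly what the x86 two-step
initialization I mentioned above looks like. A heavily simplified
sketch, not the literal code from target/i386/cpu.c:

  /*
   * instance_init must not fail, so it only records the requested
   * configuration; feature expansion is deferred to realize time.
   */
  static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
  {
      X86CPU *cpu = X86_CPU(dev);
      Error *local_err = NULL;

      /* Expand features based on accelerator/host capabilities. */
      x86_cpu_expand_features(cpu, &local_err);
      if (local_err) {
          /* realize can fail cleanly; instance_init cannot. */
          error_propagate(errp, local_err);
          return;
      }

      /* ... rest of realize ... */
  }

The point being that realize can report an error to the caller, while
a bare "return" from instance_init silently leaves the object
half-initialized.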