On Tue, Oct 14, 2025 at 04:24:53PM +0300, Grygorii Strashko wrote:
> On 13.10.25 15:17, Roger Pau Monné wrote:
> > On Tue, Sep 30, 2025 at 12:52:16PM +0000, Grygorii Strashko wrote:
> > > From: Sergiy Kibrik <[email protected]>
> > > +
> > > +   If unsure, say Y.
> > > +
> > >   config MEM_PAGING
> > >           bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED
> > >           depends on VM_EVENT
> > > diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
> > > index 6ec2c8f2db56..736eb3f966e9 100644
> > > --- a/xen/arch/x86/hvm/Makefile
> > > +++ b/xen/arch/x86/hvm/Makefile
> > > @@ -1,6 +1,6 @@
> > >   obj-$(CONFIG_AMD_SVM) += svm/
> > >   obj-$(CONFIG_INTEL_VMX) += vmx/
> > > -obj-y += viridian/
> > > +obj-$(CONFIG_VIRIDIAN) += viridian/
> > >   obj-y += asid.o
> > >   obj-y += dm.o
> > > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > > index 23bd7f078a1d..95a80369b9b8 100644
> > > --- a/xen/arch/x86/hvm/hvm.c
> > > +++ b/xen/arch/x86/hvm/hvm.c
> > > @@ -701,9 +701,12 @@ int hvm_domain_initialise(struct domain *d,
> > >       if ( hvm_tsc_scaling_supported )
> > >           d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
> > > -    rc = viridian_domain_init(d);
> > > -    if ( rc )
> > > -        goto fail2;
> > > +    if ( is_viridian_domain(d) )
> > > +    {
> > > +        rc = viridian_domain_init(d);
> > > +        if ( rc )
> > > +            goto fail2;
> > > +    }
> > 
> > Are you sure this works as expected?
> > 
> > The viridian_feature_mask() check is implemented using an HVM param,
> > and hence can only be possibly set after the domain object is created.
> > AFAICT is_viridian_domain(d) will unconditionally return false when
> > called from domain_create() context, because the HVM params cannot
> > possibly be set ahead of the domain being created.
> 
> You are right. Thanks for catching this.
> 
> Taking the above into account, it seems Jan's proposal to convert the below
> viridian APIs into wrappers for the VIRIDIAN=n case is the right way to move forward:
> 
> int viridian_vcpu_init(struct vcpu *v);
> int viridian_domain_init(struct domain *d);
> void viridian_vcpu_deinit(struct vcpu *v);
> void viridian_domain_deinit(struct domain *d);
> 
> Right?

Possibly. If you don't want to introduce a XEN_DOMCTL_createdomain
flag you need to exclusively use the Kconfig option to decide whether
the Viridian-related structs must be allocated.  IOW: you could also
solve it by using IS_ENABLED(CONFIG_VIRIDIAN) instead of
is_viridian_domain() for most of the calls here.

The wrapper option might be better IMO, rather than sprinkling
IS_ENABLED(CONFIG_VIRIDIAN) checks around.

> [1] https://patchwork.kernel.org/comment/26595213/
> 
> > 
> > If you want to do anything like this you will possibly need to
> > introduce a new flag to XEN_DOMCTL_createdomain to signal whether the
> > domain has Viridian extensions enabled or not, so that it's known
> > in the context where domain_create() gets called.
> 
> In my opinion, it might be good not to go that far within this submission:
> - It's not intended to change the existing behavior of either Xen or the
>   toolstack for VIRIDIAN=y (the default).
> - It just opts out of Viridian support when not needed.

OK, that's fine.

One further request though: if Viridian is disabled at build time in
Kconfig, setting or fetching HVM_PARAM_VIRIDIAN should return -ENODEV
or a similar error.  I don't think this is done as part of this patch.

> > 
> > Given that Hyper-V is also available on arm64, it should be a global
> > flag, as opposed to a per-arch one in xen_arch_domainconfig IMO.
> > >       rc = alternative_call(hvm_funcs.domain_initialise, d);
> > >       if ( rc != 0 )
> > > @@ -739,7 +742,8 @@ void hvm_domain_relinquish_resources(struct domain *d)
> > >       if ( hvm_funcs.nhvm_domain_relinquish_resources )
> > >           alternative_vcall(hvm_funcs.nhvm_domain_relinquish_resources, d);
> > > -    viridian_domain_deinit(d);
> > > +    if ( is_viridian_domain(d) )
> > > +        viridian_domain_deinit(d);
> > >       ioreq_server_destroy_all(d);
> > > @@ -1643,9 +1647,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
> > >            && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
> > >           goto fail5;
> > > -    rc = viridian_vcpu_init(v);
> > > -    if ( rc )
> > > -        goto fail6;
> > > +    if ( is_viridian_domain(d) )
> > > +    {
> > > +        rc = viridian_vcpu_init(v);
> > > +        if ( rc )
> > > +            goto fail6;
> > > +    }
> > >       rc = ioreq_server_add_vcpu_all(d, v);
> > >       if ( rc != 0 )
> > > @@ -1675,13 +1682,15 @@ int hvm_vcpu_initialise(struct vcpu *v)
> > >    fail2:
> > >       hvm_vcpu_cacheattr_destroy(v);
> > >    fail1:
> > > -    viridian_vcpu_deinit(v);
> > > +    if ( is_viridian_domain(d) )
> > > +        viridian_vcpu_deinit(v);
> > >       return rc;
> > >   }
> > >   void hvm_vcpu_destroy(struct vcpu *v)
> > >   {
> > > -    viridian_vcpu_deinit(v);
> > > +    if ( is_viridian_domain(v->domain) )
> > > +        viridian_vcpu_deinit(v);
> > >       ioreq_server_remove_vcpu_all(v->domain, v);
> > > diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> > > index c0be24bd2210..1212cc418728 100644
> > > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > > @@ -1116,14 +1116,14 @@ static int cf_check viridian_save_domain_ctxt(
> > >   {
> > >       const struct domain *d = v->domain;
> > >       const struct viridian_domain *vd = d->arch.hvm.viridian;
> > > -    struct hvm_viridian_domain_context ctxt = {
> > > -        .hypercall_gpa = vd->hypercall_gpa.raw,
> > > -        .guest_os_id = vd->guest_os_id.raw,
> > > -    };
> > > +    struct hvm_viridian_domain_context ctxt = {};
> > >       if ( !is_viridian_domain(d) )
> > >           return 0;
> > > +    ctxt.hypercall_gpa = vd->hypercall_gpa.raw;
> > > +    ctxt.guest_os_id = vd->guest_os_id.raw;
> > > +
> > >       viridian_time_save_domain_ctxt(d, &ctxt);
> > >       viridian_synic_save_domain_ctxt(d, &ctxt);
> > > @@ -1136,6 +1136,9 @@ static int cf_check viridian_load_domain_ctxt(
> > >       struct viridian_domain *vd = d->arch.hvm.viridian;
> > >       struct hvm_viridian_domain_context ctxt;
> > > +    if ( !is_viridian_domain(d) )
> > > +        return -EILSEQ;
> > > +
> > >       if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
> > >           return -EINVAL;
> > > @@ -1172,6 +1175,9 @@ static int cf_check viridian_load_vcpu_ctxt(
> > >       struct vcpu *v;
> > >       struct hvm_viridian_vcpu_context ctxt;
> > > +    if ( !is_viridian_domain(d) )
> > > +        return -EILSEQ;
> > 
> > Nit: we usually use EILSEQ for unreachable conditions, but here it's a
> > toolstack controlled input.  Maybe we could instead use ENODEV here?
> > 
> > As it's not really an illegal restore stream, but the selected guest
> > configuration doesn't match what's then loaded.
> 
> I'm a "worker bee" here and can change it once again to -ENODEV here.
> But it would be really good if this were agreed at the maintainers' level somehow.
> 
> EILSEQ was used as per [2]

Didn't know it was explicitly requested, then leave it like that and
ignore this comment.

Thanks, Roger.
