On Wed, Sep 27, 2023 at 11:55:19AM +0200, Jan Beulich wrote:
> On 27.09.2023 10:51, Roger Pau Monné wrote:
> > On Wed, May 03, 2023 at 05:54:47PM +0200, Jan Beulich wrote:
> > I guess we should also zap the runstate handlers, or else we might
> > corrupt guest state.
>
> So you think the guest might re-register a different area post resume?
> I can certainly add another patch to zap the handles; I don't think it
> should be done right here, not the least because if we see room for
> such behavior, that change may even want backporting.

For correctness it would be good to zap them, but there's no rush, as
I do think guests will set them to the same address on resume.
> >> @@ -1568,6 +1572,19 @@ void unmap_vcpu_info(struct vcpu *v)
> >>         put_page_and_type(mfn_to_page(mfn));
> >> }
> >>
> >> +/*
> >> + * This is only intended to be used for domain cleanup (or more generally
> >> + * only with at least the respective vCPU, if it's not the current one,
> >> + * reliably paused).
> >> + */
> >> +void unmap_guest_area(struct vcpu *v, struct guest_area *area)
> >
> > vcpu param could be const given the current code, but I assume this
> > will be changed by future patches. Same could be said about
> > guest_area but I'm confident that will change.
>
> True for both.
>
> >> +{
> >> +    struct domain *d = v->domain;
> >> +
> >> +    if ( v != current )
> >> +        ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
> >
> > Isn't this racy?
>
> It is, yes.
>
> > What guarantees that the vcpu won't be kicked just
> > after the check has been performed?
>
> Nothing. This check isn't any better than assertions that an ordinary
> spinlock is held. I assume you realize that we've got a number of such
> assertions elsewhere already.

Right, but unlike with spinlock assertions, the code here could be made
safe just by pausing the vCPU?
Thanks, Roger.