On Fri, 2023-06-02 at 17:58 +0100, Peter Maydell wrote:
> On Mon, 22 May 2023 at 19:52, David Woodhouse <dw...@infradead.org> wrote:
> > 
> > From: David Woodhouse <d...@amazon.co.uk>
> > 
> > Coverity points out (CID 1507534) that we sometimes access
> > env->xen_singleshot_timer_ns under the protection of
> > env->xen_timers_lock (eg in xen_vcpu_singleshot_timer_event()) and
> > sometimes not (the specific case Coverity complains about is in
> > do_vcpu_soft_reset()).
> > 
> > This isn't strictly an issue. There are two modes for the timers; if
> > the kernel supports the EVTCHN_SEND capability then it handles all the
> > timer hypercalls and delivery internally, and all we need to do is an
> > ioctl to get/set the next timer as part of the vCPU state. If the
> > kernel doesn't have that support, then we do all the emulation within
> > qemu, and *those* are the code paths where we actually care about the
> > locking.
> > 
> > But it doesn't hurt to be a little bit more consistent and avoid having
> > to explain *why* it's OK.
> > 
> > Signed-off-by: David Woodhouse <d...@amazon.co.uk>
> 
> Looking a bit more closely at the Coverity lists, there's also
> CID 1507968 which is for the access to env->xen_singleshot_timer_ns
> in kvm_get_xen_state(), which this patch doesn't touch, I think ?

That one is basically a false positive, since it's for the case where
the kernel is handling the timers and all we do is an ioctl to read the
state. That is *inherently* racy if any other vCPU can be running and
modifying them at the time, and doesn't have anything to race *with* if
not :)

On the other hand, the one in kvm_put_xen_state() which calls
do_set_singleshot_timer() is for the userspace case, and *that* one
probably wants locking. Again in practice it's probably never going to
have anything to race with, but that's a poor excuse. New patch
follows.
