On 02/08/2017 06:25 PM, Tamas K Lengyel wrote:
> On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru
> <rcojoc...@bitdefender.com> wrote:
>> It is currently possible for the guest to lock up when subscribing
>> to synchronous vm_events if max_vcpus is larger than the number of
>> available ring buffer slots. This patch no longer blocks already
>> paused vCPUs, fixing the issue for this use case.
>>
>> Signed-off-by: Razvan Cojocaru <rcojoc...@bitdefender.com>
>> ---
>>  xen/common/vm_event.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>> index 82ce8f1..2005a64 100644
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>>       * See the comments above wake_blocked() for more information
>>       * on how this mechanism works to avoid waiting. */
>>      avail_req = vm_event_ring_available(ved);
>> -    if( current->domain == d && avail_req < d->max_vcpus )
>> +    if( current->domain == d && avail_req < d->max_vcpus &&
>> +        !atomic_read( &current->vm_event_pause_count ) )
>>          vm_event_mark_and_pause(current, ved);
>
> Hi Razvan,
> I would also like to have the change made in this patch that unblocks
> the vCPUs as soon as a spot opens up on the ring. Doing just what this
> patch has will not solve the problem if there are asynchronous events
> used.
Fair enough. I thought that might need more discussion and thus put it
into a subsequent patch, but I'll modify that as well, give it a spin,
and submit V2.

Thanks,
Razvan
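[Editor's note] The blocking scheme under discussion can be sketched as a small toy model: a vCPU posting a synchronous event is paused when ring slots run low (unless it is already paused, which is the fix in the patch above), and, as Tamas requests, a blocked vCPU is woken as soon as consuming a response frees a slot. All names here (`ring_avail`, `put_request`, `get_response`, the `vcpu` struct) are illustrative placeholders, not Xen's actual vm_event API, and the real code deals with locking and wait queues this model omits.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the vm_event ring pressure logic; constants are arbitrary. */
#define MAX_VCPUS  4
#define RING_SLOTS 2

struct vcpu {
    int  pause_count;   /* stand-in for vm_event_pause_count */
    bool blocked;       /* stand-in for being marked-and-paused */
};

static struct vcpu vcpus[MAX_VCPUS];
static int ring_avail = RING_SLOTS;

/* Posting an event consumes a slot.  The patch's fix: a vCPU that is
 * already paused (pause_count != 0) is never blocked again, since it
 * cannot generate further events while paused. */
static void put_request(struct vcpu *v)
{
    assert(ring_avail > 0);
    ring_avail--;
    if (ring_avail < MAX_VCPUS && v->pause_count == 0) {
        v->blocked = true;
        v->pause_count++;
    }
}

/* The follow-up change Tamas asks for: when a response frees a slot,
 * immediately unblock a waiting vCPU instead of leaving it paused. */
static void get_response(void)
{
    ring_avail++;
    for (int i = 0; i < MAX_VCPUS; i++) {
        if (vcpus[i].blocked) {
            vcpus[i].blocked = false;
            vcpus[i].pause_count--;
            break;
        }
    }
}
```

Without the eager wake-up in `get_response()`, asynchronous events could keep draining slots while blocked vCPUs never get a chance to resume, which is the lock-up scenario the thread describes.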