On Fri, 2016-06-24 at 13:25, Wu, Feng wrote:
> > Then, in this case, the reason why we are sure that all the pcpus
> > are executing the body of the tasklet is indeed the structure of
> > stop_machine_run() and stopmachine_action() themselves, which are
> > built to make sure
> Thanks for your reply. Yes, I think this is the point. Here the
> descheduling of vCPU3 happens, and the reason we will choose the
> tasklet as the next running unit for sure (not choosing another vCPU,
> or vCPU3 itself, as the next running unit) is that the tasklet
> overrides all
On Fri, 2016-06-24 at 07:59, Wu, Feng wrote:
> > So, vCPU 3 was running, but then someone called stop_machine_run(),
> > which causes the descheduling of vCPU 3, and the execution of the
> Subject: Re: [Xen-devel] [PATCH 0/3] VMX: Properly handle pi descriptor and
> per-cpu blocking list
>
On Fri, 2016-06-24 at 06:11, Wu, Feng wrote:
> > No, because we call cpu_disable_scheduler() from __cpu_disable()
> > only when the system state is SYS_STATE_suspend already, and hence
> > we take the
On Thu, 2016-06-23 at 12:33, Wu, Feng wrote:
> > It goes through all the vcpus of all domains, and does not check or
> > care whether they are running, runnable or blocked.
> >
> > Let's look at
> From: Dario Faggioli [mailto:dario.faggi...@citrix.com]
> Sent: Tuesday, May 24, 2016 10:02 PM
> From: Dario Faggioli [mailto:dario.faggi...@citrix.com]
> Sent: Tuesday, May 24, 2016 10:47 PM
On Tue, 2016-05-24 at 13:33, Wu, Feng wrote:
> > > If a vCPU is blocked, there is nothing to do, and in fact, nothing
> > > happens (as vcpu_sleep_nosync() and vcpu_wake() are NOPs in that
On Tue, 2016-05-24 at 10:07, Wu, Feng wrote:
> > See, for instance, cpu_disable_scheduler() in schedule.c. What we
> > do is go over all the vcpus of all domains of either the system or
> > the cpupool, and force the ones that we found with v->processor set
> > to the pCPU that is
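The walk described above can be sketched as follows. This is a simplified illustration, not Xen's real code: the `struct vcpu`/`struct domain` layouts are reduced to the one field that matters here, and `evacuate_cpu` is a hypothetical name standing in for the relevant part of cpu_disable_scheduler().

```c
/* Simplified structures -- only v->processor is modeled. */
struct vcpu { int processor; };
struct domain { struct vcpu *vcpus; int nr_vcpus; };

/* Walk every vcpu of every domain and force the ones whose
 * v->processor points at the dying pCPU to move elsewhere,
 * without checking whether they are running, runnable or
 * blocked -- mirroring what the thread says about
 * cpu_disable_scheduler(). Returns how many were moved. */
static int evacuate_cpu(struct domain *doms, int nr_doms,
                        int dying_cpu, int fallback_cpu)
{
    int moved = 0;
    for ( int d = 0; d < nr_doms; d++ )
        for ( int i = 0; i < doms[d].nr_vcpus; i++ )
        {
            struct vcpu *v = &doms[d].vcpus[i];
            if ( v->processor == dying_cpu )
            {
                v->processor = fallback_cpu;  /* forced migration */
                moved++;
            }
        }
    return moved;
}
```

The point of the sketch is the unconditional walk: blocked vCPUs are touched too, which is exactly why their PI blocking-list state needs corresponding handling.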
> From: Wu, Feng
> Sent: Tuesday, May 24, 2016 6:08 PM
> From: Dario Faggioli [mailto:dario.faggi...@citrix.com]
> Sent: Monday, May 23, 2016 8:39 PM
On Mon, 2016-05-23 at 02:51 -0600, Jan Beulich wrote:
> > > > On 23.05.16 at 10:44, wrote:
> > >
> > > vCPU-s currently having their v->processor set to the pCPU being
> > > hot removed would simply get migrated elsewhere. If that's not
> > > accompanied by respective PI
>>> On 23.05.16 at 10:44, wrote:
>> >>> On 20.05.16 at 12:46, wrote:
>> > If this is the case, it can address part of my concern. Another
>> > concern is if a vCPU is
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: Monday, May 23, 2016 4:09 PM
>>> On 20.05.16 at 12:46, wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: Friday, May 20, 2016 6:27 PM
>>> On 20.05.16 at 10:53, wrote:
> I still have two open questions, which need comments/suggestions from you guys.
> - What should we do for the per-cpu blocking list during vcpu hotplug?
What do you mean by vcpu hotplug? vcpus never get removed
from a VM (from the hypervisor
The current VT-d PI related code may operate incorrectly in the following
scenarios:
- When the last assigned device is detached from the domain, all the
PI related hooks are removed; however, the vCPU can still be
blocked, switched to another pCPU, etc., all without PI being aware
of it. After the next
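The hazard in this scenario can be illustrated with a minimal model: tearing down the PI hooks without also removing the vCPU from its per-cpu blocking list leaves a stale entry behind. The structures and function names below (`pi_vcpu`, `pi_hooks_remove_naive`, `pi_hooks_remove_safe`) are hypothetical simplifications, not the actual vmx code.

```c
#include <stdbool.h>

/* Reduced PI state for one vCPU: whether it sits on some pCPU's
 * PI blocking list, and whether the arch PI hooks are installed. */
struct pi_vcpu {
    bool on_blocking_list;
    bool hooks_installed;
};

/* Naive teardown on device detach: only drops the hooks. */
static void pi_hooks_remove_naive(struct pi_vcpu *v)
{
    v->hooks_installed = false;
}

/* Safe teardown: also remove the vCPU from the blocking list,
 * so no stale entry survives the hook removal. */
static void pi_hooks_remove_safe(struct pi_vcpu *v)
{
    v->on_blocking_list = false;
    v->hooks_installed = false;
}

/* The bug condition: still listed, but nothing will ever run
 * the PI wakeup logic for this vCPU again. */
static bool pi_state_stale(const struct pi_vcpu *v)
{
    return v->on_blocking_list && !v->hooks_installed;
}
```

The model only shows the invariant at stake (list membership must not outlive the hooks); the real fix additionally has to worry about locking and the vCPU moving between pCPUs.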