>>> On 27.01.18 at 02:27, wrote:
>> On 25/01/18 16:09, Andrew Cooper wrote:
>> > On 25/01/18 15:57, Jan Beulich wrote:
>> > > For the record, the overwhelming majority of calls to
>> > > __sync_local_execstate() being responsible for the behavior
>> > > come from invalidate_interrupt(), which suggests to me that
On Fri, 2018-01-26 at 02:43 -0700, Jan Beulich wrote:
> > > > On 26.01.18 at 02:08, wrote:
> > And in order to go and investigate this a bit further, Jan, what is it
> > that you were doing when you saw what you described above? AFAIUI,
> > that's booting an HVM guest,
On Thu, 2018-01-25 at 16:09 +0000, Andrew Cooper wrote:
> On 25/01/18 15:57, Jan Beulich wrote:
> > > > >
> > For the record, the overwhelming majority of calls to
> > __sync_local_execstate() being responsible for the behavior
> > come from invalidate_interrupt(), which suggests to me that
> >
>>> On 25.01.18 at 17:09, wrote:
> On 25/01/18 15:57, Jan Beulich wrote:
> On 24.01.18 at 14:12, wrote:
>>> @@ -1743,6 +1744,34 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>>>     }
>>>
CC'ing Dario with a working email address this time...
>>> On 24.01.18 at 14:12, wrote:
> @@ -1743,6 +1744,34 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>     }
>
>     ctxt_switch_levelling(next);
> +
> +    if ( opt_ibpb && !is_idle_domain(nextd) )
Is the idle domain check here really
On Wed, 2018-01-24 at 13:49 +0000, Andrew Cooper wrote:
> On 24/01/18 13:34, Woodhouse, David wrote:
> > I am loath to suggest *more* tweakables, but given the IBPB cost is
> > there any merit in having a mode which does it only if the *domain* is
> > different, regardless of vcpu_id?
>
> This
On Wed, 2018-01-24 at 13:12 +0000, Andrew Cooper wrote:
> + * Squash the domid and vcpu id together for comparason

*comparison

> + * efficiency. We could in principle stash and compare the struct
> + * vcpu pointer, but this risks a false alias if a domain
Issuing an IBPB command flushes the Branch Target Buffer, so that any poison
left by one vcpu won't remain when beginning to execute the next.
The cost of IBPB is substantial, and it is skipped on transitions to idle, as
Xen's idle code is already robust. All transitions into vcpu context are fully