On 14.11.2019 15:42, Roger Pau Monné wrote:
> On Thu, Nov 14, 2019 at 02:35:56PM +0100, Jan Beulich wrote:
>> On 13.11.2019 16:59, Roger Pau Monne wrote:
>>> +    for ( id = find_first_bit(vcpus, d->max_vcpus);
>>> +          id < d->max_vcpus;
>>> +          id = find_next_bit(vcpus, d->max_vcpus, id + 1) )
On Thu, Nov 14, 2019 at 02:35:56PM +0100, Jan Beulich wrote:
> On 13.11.2019 16:59, Roger Pau Monne wrote:
> > +    for ( id = find_first_bit(vcpus, d->max_vcpus);
> > +          id < d->max_vcpus;
> > +          id = find_next_bit(vcpus, d->max_vcpus, id + 1) )
> > +    {
> > +        if (
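The quoted loop walks a bitmap of destination vCPU IDs with Xen's `find_first_bit`/`find_next_bit` helpers, bounded by `d->max_vcpus`. A minimal self-contained sketch of that iteration pattern, with hypothetical stand-ins for the bitmap helpers (names mirror Xen's, but the implementations here are simplified):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Simplified stand-in for Xen's find_next_bit(): returns the index of the
 * next set bit at or after 'start', or 'nbits' if none is found. */
static unsigned int find_next_bit(const unsigned long *addr,
                                  unsigned int nbits, unsigned int start)
{
    unsigned int i;

    for ( i = start; i < nbits; i++ )
        if ( addr[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)) )
            return i;

    return nbits; /* past-the-end sentinel, as with Xen's helper */
}

static unsigned int find_first_bit(const unsigned long *addr,
                                   unsigned int nbits)
{
    return find_next_bit(addr, nbits, 0);
}

/* Count the vCPUs selected in the bitmap, using the same loop shape as the
 * quoted patch hunk (bounded by the domain's max_vcpus). */
static unsigned int count_dest_vcpus(const unsigned long *vcpus,
                                     unsigned int max_vcpus)
{
    unsigned int id, n = 0;

    for ( id = find_first_bit(vcpus, max_vcpus);
          id < max_vcpus;
          id = find_next_bit(vcpus, max_vcpus, id + 1) )
        n++;

    return n;
}
```

The past-the-end return value is what makes the `id < d->max_vcpus` loop condition terminate cleanly once no further bits are set.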
On 13.11.2019 16:59, Roger Pau Monne wrote:
> @@ -5266,6 +5267,36 @@ void hvm_set_segment_register(struct vcpu *v, enum x86_segment seg,
>      alternative_vcall(hvm_funcs.set_segment_register, v, seg, reg);
>  }
>  
> +int hvm_intr_get_dests(struct domain *d, uint8_t dest, uint8_t dest_mode,
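The quoted prototype takes a `dest`/`dest_mode` pair, i.e. the APIC destination encoding carried by an MSI. A hedged sketch of how such destination matching works in general, for a single vAPIC, is below; the struct, helper name, and flat-logical-mode assumption are illustrative and not taken from the actual Xen implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define APIC_DEST_PHYSICAL 0
#define APIC_DEST_LOGICAL  1

/* Hypothetical per-vCPU local APIC identifiers (flat logical model). */
struct vlapic_ids {
    uint8_t apic_id;  /* physical APIC ID */
    uint8_t ldr;      /* logical destination register, one bit per CPU */
};

/* Return whether this vAPIC is targeted by the given dest/dest_mode pair.
 * Physical mode matches on APIC ID (0xff is broadcast); flat logical mode
 * matches if any destination bit overlaps the LDR. */
static bool vlapic_match_dest(const struct vlapic_ids *v, uint8_t dest,
                              uint8_t dest_mode)
{
    if ( dest_mode == APIC_DEST_PHYSICAL )
        return dest == 0xff || v->apic_id == dest;

    return (v->ldr & dest) != 0;
}
```

A function like the quoted `hvm_intr_get_dests()` would presumably apply such a per-vAPIC check across the domain's vCPUs to build the destination bitmap that the earlier `find_first_bit`/`find_next_bit` loop then iterates.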
Ran the same test and did not hit the issue.

Tested-by: Joe Jin
On 11/13/19 7:59 AM, Roger Pau Monne wrote:
> When using posted interrupts and the guest migrates MSI from vCPUs Xen
> needs to flush any pending PIRR vectors on the previous vCPU, or else
> those vectors could get wrongly injected at a later point when the MSI
> fields are already updated.
When using posted interrupts and the guest migrates MSI from vCPUs Xen
needs to flush any pending PIRR vectors on the previous vCPU, or else
those vectors could get wrongly injected at a later point when the MSI
fields are already updated.
The usage of a fixed vCPU in lowest priority mode when
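The race the commit message describes can be illustrated with a toy model: a vector left pending in the previous vCPU's posted-interrupt request (PIR) bitmap must be synced into that vCPU's IRR when the MSI is retargeted, rather than being injected later on the stale target. All names and layout below are illustrative, not Xen's actual data structures:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NR_VECTORS 256

/* Toy vCPU: a PIR bitmap (pending posted interrupts) and the vAPIC IRR. */
struct toy_vcpu {
    uint8_t pir[NR_VECTORS / 8];
    uint8_t irr[NR_VECTORS / 8];
};

static void set_bit8(uint8_t *map, unsigned int v)
{
    map[v / 8] |= 1 << (v % 8);
}

static int test_bit8(const uint8_t *map, unsigned int v)
{
    return (map[v / 8] >> (v % 8)) & 1;
}

static void clear_bit8(uint8_t *map, unsigned int v)
{
    map[v / 8] &= ~(1 << (v % 8));
}

/* On MSI retargeting, flush any pending PIR bits on the previous vCPU into
 * its IRR so they are delivered there, instead of lingering in the PIR and
 * firing after the MSI fields already point at the new vCPU. */
static void flush_pir_on_migrate(struct toy_vcpu *prev)
{
    unsigned int v;

    for ( v = 0; v < NR_VECTORS; v++ )
        if ( test_bit8(prev->pir, v) )
        {
            clear_bit8(prev->pir, v);
            set_bit8(prev->irr, v);
        }
}
```

Without such a flush, the stale PIR bit would be consumed on a later vmentry and the vector would be delivered according to the already-updated MSI fields, i.e. on the wrong vCPU.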