On 16.10.2025 17:07, Roger Pau Monné wrote:
> On Thu, Oct 16, 2025 at 01:47:38PM +0200, Jan Beulich wrote:
>> On 16.10.2025 12:24, Roger Pau Monné wrote:
>>> On Thu, Oct 16, 2025 at 09:31:21AM +0200, Jan Beulich wrote:
>>>> @@ -454,9 +456,21 @@ static struct hpet_event_channel *hpet_g
>>>>      if ( num_hpets_used >= nr_cpu_ids )
>>>>          return &hpet_events[cpu];
>>>>  
>>>> +    /*
>>>> +     * Try the least recently used channel first.  It may still have its IRQ's
>>>> +     * affinity set to the desired CPU.  This way we also limit having multiple
>>>> +     * of our IRQs raised on the same CPU, in possibly a nested manner.
>>>> +     */
>>>> +    ch = per_cpu(lru_channel, cpu);
>>>> +    if ( ch && !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
>>>> +    {
>>>> +        ch->cpu = cpu;
>>>> +        return ch;
>>>> +    }
>>>> +
>>>> +    /* Then look for an unused channel. */
>>>>      next = arch_fetch_and_add(&next_channel, 1) % num_hpets_used;
>>>>  
>>>> -    /* try unused channel first */
>>>>      for ( i = next; i < next + num_hpets_used; i++ )
>>>>      {
>>>>          ch = &hpet_events[i % num_hpets_used];
>>>> @@ -479,6 +493,8 @@ static void set_channel_irq_affinity(str
>>>>  {
>>>>      struct irq_desc *desc = irq_to_desc(ch->msi.irq);
>>>>  
>>>> +    per_cpu(lru_channel, ch->cpu) = ch;
>>>> +
>>>>      ASSERT(!local_irq_is_enabled());
>>>>      spin_lock(&desc->lock);
>>>>      hpet_msi_mask(desc);
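
For reference, the hunk relies on a per-CPU pointer whose declaration falls
outside the quoted context; presumably it is something along these lines
(the name is taken from the hunk, the actual declaration in the patch may
differ in placement or type qualifiers):

    static DEFINE_PER_CPU(struct hpet_event_channel *, lru_channel);
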
>>>
>>> Maybe I'm missing the point here, but you are resetting the MSI
>>> affinity here anyway, so there isn't much point in attempting to
>>> re-use the same channel when Xen still unconditionally goes through
>>> the process of setting the affinity?
>>
>> While still using normal IRQs, there's a benefit: we can re-use the
>> same vector (as we stay on the same CPU), and hence save an IRQ
>> migration (the main source of nested IRQs according to my
>> observations).
> 
> Hm, I see.  You short-circuit all the logic in _assign_irq_vector().
> 
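
The short-circuit being referred to amounts to something like the following
(purely an illustrative sketch, not the actual _assign_irq_vector() code;
the field names reflect my reading of the x86 IRQ handling code):

    /* Illustrative sketch only; the real logic lives in _assign_irq_vector(). */
    static bool can_keep_vector(const struct irq_desc *desc,
                                const cpumask_t *mask)
    {
        /*
         * If the new destination still intersects the CPUs the currently
         * assigned vector targets, the vector can be kept as-is, and no
         * IRQ migration (with its window for nested IRQs) is needed.
         */
        return desc->arch.vector > 0 &&
               cpumask_intersects(mask, desc->arch.cpu_mask);
    }
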
>> We could actually do even better by avoiding the mask/unmask pair there,
>> which would avoid triggering the "immediate" IRQ that I (for now) see as
>> the only explanation of the large number of "early" IRQs that I observe
>> on (at least) Intel hardware. That would require doing the msg.dest32
>> check earlier, but otherwise looks feasible. (Actually, the unmask would
>> still be necessary, in case we're called with the channel already masked.)
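
To make the idea concrete, a rough sketch of what that might look like
(this is not an actual follow-up change; it assumes the helpers visible in
the hunk above plus hpet_msi_set_affinity(), cpu_physical_id(), and the
msg.dest32 field mentioned here, and it omits the tail of the function):

    static void set_channel_irq_affinity(struct hpet_event_channel *ch)
    {
        struct irq_desc *desc = irq_to_desc(ch->msi.irq);

        per_cpu(lru_channel, ch->cpu) = ch;

        ASSERT(!local_irq_is_enabled());
        spin_lock(&desc->lock);

        /* Re-program the destination only if it actually changed. */
        if ( ch->msi.msg.dest32 != cpu_physical_id(ch->cpu) )
        {
            hpet_msi_mask(desc);
            hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
        }

        /*
         * The unmask is needed even when the destination already matched,
         * in case we were handed the channel in a masked state.
         */
        hpet_msi_unmask(desc);
        spin_unlock(&desc->lock);

        /* ... remainder of the function (dropping ch->lock etc.) unchanged ... */
    }
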
> 
> Checking with .dest32 seems a bit crude; I would possibly prefer to
> slightly modify hpet_attach_channel() to notice when ch->cpu == cpu
> and avoid the call to set_channel_irq_affinity()?

That would be an always-false condition, wouldn't it? "attach" and "detach"
are used strictly in pairs, and after "detach" ch->cpu != cpu.

Jan
