On 01.02.2022 11:46, Roger Pau Monné wrote:
> On Thu, Jan 27, 2022 at 04:13:21PM +0100, Jan Beulich wrote:
>> From: Artem Bityutskiy <artem.bityuts...@linux.intel.com>
>>
>> Enable local interrupts before requesting C1 on the last two generations
>> of Intel Xeon platforms: Sky Lake, Cascade Lake, Cooper Lake, Ice Lake.
>> This decreases average C1 interrupt latency by about 5-10%, as measured
>> with the 'wult' tool.
>>
>> The '->enter()' function of the driver enters C-states with local
>> interrupts disabled by executing the 'monitor' and 'mwait' pair of
>> instructions. If an interrupt happens, the CPU exits the C-state and
>> continues executing instructions after 'mwait'. It does not jump to
>> the interrupt handler, because local interrupts are disabled. The
>> cpuidle subsystem enables interrupts a bit later, after doing some
>> housekeeping.
>>
>> With this patch, we enable local interrupts before requesting C1. In
>> this case, if the CPU wakes up because of an interrupt, it will jump
>> to the interrupt handler right away. The cpuidle housekeeping will be
>> done after the pending interrupt(s) are handled.
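
(Just to spell out the idea for anyone skimming the thread: the net
effect is roughly the sketch below, reusing Xen's existing
mwait_idle_with_hints() / local_irq_enable() helpers and the new
per-state flag introduced further down - not the literal patch hunk.)

    /* Allow a wakeup interrupt to be serviced right away, instead of
     * only after the cpuidle housekeeping has completed. */
    if ( cx->irq_enable_early )
        local_irq_enable();

    /* Request the C-state; the CPU resumes here upon wakeup. */
    mwait_idle_with_hints(cx->address, MWAIT_ECX_INTERRUPT_BREAK);
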
>>
>> Enabling interrupts before entering a C-state has a measurable impact
>> for faster C-states, like C1. Deeper, but slower C-states like C6 do
>> not really benefit from this sort of change, because their latency is
>> a lot higher compared to the delay added by cpuidle housekeeping.
>>
>> This change was also tested with cyclictest and dbench. In case of Ice
>> Lake, the average cyclictest latency decreased by 5.1%, and the average
>> 'dbench' throughput increased by about 0.8%. Both tests were run for 4
>> hours with only C1 enabled (all other idle states, including 'POLL',
>> were disabled). CPU frequency was pinned to HFM, and uncore frequency
>> was pinned to the maximum value. The other platforms had similar
>> single-digit percentage improvements.
>>
>> It is worth noting that this patch affects 'cpuidle' statistics a tiny
>> bit.  Before this patch, C1 residency did not include the interrupt
>> handling time, but with this patch, it will include it. This is similar
>> to what happens in case of the 'POLL' state, which also runs with
>> interrupts enabled.
>>
>> Suggested-by: Len Brown <len.br...@intel.com>
>> Signed-off-by: Artem Bityutskiy <artem.bityuts...@linux.intel.com>
>> [Linux commit: c227233ad64c77e57db738ab0e46439db71822a3]
>>
>> We don't have a pointer into cpuidle_state_table[] readily available.
>> To compensate, propagate the flag into struct acpi_processor_cx.
>>
>> Unlike Linux we want to
>> - disable IRQs again after MWAITing, as subsequently invoked functions
>>   assume so,
>> - avoid enabling IRQs if cstate_restore_tsc() is not a no-op, to avoid
>>   interfering with, in particular, the time rendezvous.
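
To make those two points concrete, the resulting shape is roughly the
below (again only a sketch, under the assumption that NONSTOP_TSC is
what makes cstate_restore_tsc() a no-op):

    bool irqs_enabled_early = cx->irq_enable_early &&
                              boot_cpu_has(X86_FEATURE_NONSTOP_TSC);

    if ( irqs_enabled_early )
        local_irq_enable();

    mwait_idle_with_hints(cx->address, MWAIT_ECX_INTERRUPT_BREAK);

    /* Subsequently invoked functions expect to run with IRQs off. */
    if ( irqs_enabled_early )
        local_irq_disable();
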
>>
>> Signed-off-by: Jan Beulich <jbeul...@suse.com>
> 
> Acked-by: Roger Pau Monné <roger....@citrix.com>

Thanks.

>> ---
>> RFC: I'm not entirely certain that we want to take this, i.e. whether
>>      we're as worried about interrupt latency as Linux is.
> 
> I would assume taking this would make it easier for you to pick
> further patches.

At least a little, yes.

>> --- a/xen/include/xen/cpuidle.h
>> +++ b/xen/include/xen/cpuidle.h
>> @@ -42,6 +42,7 @@ struct acpi_processor_cx
>>      u8 idx;
>>      u8 type;         /* ACPI_STATE_Cn */
>>      u8 entry_method; /* ACPI_CSTATE_EM_xxx */
>> +    bool irq_enable_early;
> 
> Should you use a bit field here and limit the field to :1 in
> expectation of maybe adding more flags at a later point?

Well, I had considered doing so, but then thought we can easily
switch at such a future point, leaving the struct consistently
without bit fields for now. Especially since there's still a 32-bit
padding field left to put stuff in.
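
(If we do switch at some point, it would merely mean turning the field
into something like the below - the second flag is hypothetical, just
to illustrate the ":1" layout Roger suggested.)

    bool irq_enable_early:1;
    /* bool another_flag:1; */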

Jan

