On 12.11.2025 11:51, Mykyta Poturai wrote:
> This will reduce code duplication for the upcoming CPU hotplug support
> patch for Arm64.
>
> The SMT-disable enforcement check is moved into a separate
> architecture-specific function.
>
> Signed-off-by: Mykyta Poturai <[email protected]>
Solely from an x86 perspective this looks okay to me, but on Arm you introduce
...
> --- a/xen/common/smp.c
> +++ b/xen/common/smp.c
> @@ -16,6 +16,7 @@
> * GNU General Public License for more details.
> */
>
> +#include <xen/cpu.h>
> #include <asm/hardirq.h>
> #include <asm/processor.h>
> #include <xen/spinlock.h>
> @@ -104,6 +105,37 @@ void smp_call_function_interrupt(void)
> irq_exit();
> }
>
> +long cf_check cpu_up_helper(void *data)
> +{
> + unsigned int cpu = (unsigned long)data;
> + int ret = cpu_up(cpu);
> +
> + /* Have one more go on EBUSY. */
> + if ( ret == -EBUSY )
> + ret = cpu_up(cpu);
> +
> + if ( !ret && arch_smt_cpu_disable(cpu) )
> + {
> + ret = cpu_down_helper(data);
> + if ( ret )
> + printk("Could not re-offline CPU%u (%d)\n", cpu, ret);
> + else
> + ret = -EPERM;
> + }
> +
> + return ret;
> +}
> +
> +long cf_check cpu_down_helper(void *data)
> +{
> + unsigned int cpu = (unsigned long)data;
> + int ret = cpu_down(cpu);
> + /* Have one more go on EBUSY. */
> + if ( ret == -EBUSY )
> + ret = cpu_down(cpu);
> + return ret;
> +}
...unreachable code, which - for the case when RUNTIME_CPU_CONTROL=n - won't
even be rectified by the next patch.
Jan