On 15.03.2024 11:59, Carlo Nonato wrote:
> From: Luca Miccio <lucmic...@gmail.com>
> 
> Add a new command line parameter to configure Xen cache colors.
> These colors can be dumped with the cache coloring info debug-key.
> 
> By default, Xen uses the first color.
> Benchmarking the VM interrupt response time provides an estimate of the
> LLC usage of Xen's most latency-critical runtime task. Results on an Arm
> Cortex-A53 on a Xilinx Zynq UltraScale+ XCZU9EG show that one color, which
> reserves 64 KiB of L2, is enough to attain the best responsiveness:
> - Xen 1 color latency:   3.1 us
> - Xen 2 colors latency:  3.1 us
> 
> More colors are instead very likely to be needed on processors whose L1
> cache is physically-indexed and physically-tagged, such as Cortex-A57.
> In such cases, coloring applies to L1 as well, and there are typically two
> distinct L1 colors. Therefore, reserving only one color for Xen would
> senselessly partition a cache that is already private, i.e. underutilize
> it.

Here you say that using just a single color is undesirable on such systems.

> The default number of Xen colors is thus set to one.

Yet then, without any further explanation you conclude that 1 is the
universal default.
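
For context, the "64 KiB of L2" figure quoted above follows directly from the
cache geometry: the number of available colors is the cache way size divided
by the page size, and each color then covers cache_size / nr_colors bytes.
Below is a minimal, self-contained sketch of that arithmetic; the constants
assume the XCZU9EG's Cortex-A53 L2 (1 MiB, 16-way) and 4 KiB pages and are
illustrative only, not taken from the patch.

#include <stdio.h>

/* Illustrative constants, not from the series. */
#define PAGE_SIZE  (4U << 10)   /* 4 KiB pages */
#define L2_SIZE    (1U << 20)   /* 1 MiB unified L2 (Cortex-A53 on XCZU9EG) */
#define L2_WAYS    16U          /* 16-way set associative */

int main(void)
{
    unsigned int way_size  = L2_SIZE / L2_WAYS;     /* 64 KiB */
    unsigned int nr_colors = way_size / PAGE_SIZE;  /* 16 available colors */
    unsigned int per_color = L2_SIZE / nr_colors;   /* L2 bytes per color */

    printf("colors: %u, per-color L2 footprint: %u KiB\n",
           nr_colors, per_color >> 10);

    return 0;
}

With 16 colors available, assigning one of them to Xen reserves 1/16 of the
L2, i.e. the 64 KiB mentioned in the commit message.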

> @@ -147,6 +159,21 @@ void __init llc_coloring_init(void)
>          panic("Number of LLC colors (%u) not in range [2, %u]\n",
>                max_nr_colors, CONFIG_NR_LLC_COLORS);
>  
> +    if ( !xen_num_colors )
> +    {
> +        unsigned int i;
> +
> +        xen_num_colors = MIN(XEN_DEFAULT_NUM_COLORS, max_nr_colors);
> +
> +        printk(XENLOG_WARNING
> +               "Xen LLC color config not found. Using first %u colors\n",
> +               xen_num_colors);
> +        for ( i = 0; i < xen_num_colors; i++ )
> +            xen_colors[i] = i;
> +    }
> +    else if ( !check_colors(xen_colors, xen_num_colors) )
> +        panic("Bad LLC color config for Xen\n");

This "else" branch again lacks a bounds check against max_nr_colors, if
I'm not mistaken.
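
For illustration only (the helper name below is hypothetical, and whether
such a check belongs in check_colors() or in its caller is a design choice
for the series), the missing bounds check could take roughly this shape:

/* Hypothetical helper, not part of the series as posted. */
static bool __init xen_colors_in_range(const unsigned int *colors,
                                       unsigned int num_colors,
                                       unsigned int max_colors)
{
    unsigned int i;

    /* Every requested color index must be below the number of LLC colors. */
    for ( i = 0; i < num_colors; i++ )
        if ( colors[i] >= max_colors )
            return false;

    return true;
}

with the "else" branch then becoming something along the lines of:

    else if ( !xen_colors_in_range(xen_colors, xen_num_colors, max_nr_colors) ||
              !check_colors(xen_colors, xen_num_colors) )
        panic("Bad LLC color config for Xen\n");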

Jan
