On 02.02.21 17:44, Bram Hooimeijer wrote:
> Upon cell creation, the ROOT_COS CBM is shrunk to prevent overlap
> between the ROOT_COS CBM and the CBM of the newly created cell. The
> new ROOT_COS CBM is prevented from becoming empty; to achieve this,
> the ROOT_COS CBM potentially moves to the free bits.
> 
> There are two additional situations that need to be reflected by the
> code:
> 1. It is possible that bits which are listed as 'freed' are also taken
> by the newly created cell. Given that the freed_mask is not yet updated
> for the newly created cell, it must be checked explicitly whether the
> freed bits still prevent the ROOT_COS CBM from becoming empty after
> moving to them.
> 2. Even if the ROOT_COS CBM does not become empty, it might still
> overlap with the newly created cell. Hence, moving to the free bits
> must not skip the shrinking step in the else clause.
> 
> A minimal example to demonstrate this bug for a 10-way (bit) CBM:
> jailhouse enable: creates a root cell with COS0 and CBM 3FF
> jailhouse cell create: creates cell1 with COS1 and CBM 1F0
>   The free bits are now 20F, but as the CBM has to be contiguous, the
>   root cell has CBM 200. Free bits are 00F.
> jailhouse cell create: creates cell2 with COS2 and CBM 3FC
>   Cell cell2 requires the topmost bit (200), hence the root COS CBM
>   moves to the free bits 00F.
>   BUG: Root COS CBM gets bits 00F, overlapping with COS2.
>   FIX: Root COS CBM is shrunk to 003, preventing overlap with the new
>   cell cell2.
> 
> Fixes: 3f04eb1753bb ("x86: Introduce Cache Allocation Technology support for Intel CPUs")
> 
> Signed-off-by: Bram Hooimeijer <[email protected]>
> ---
>  hypervisor/arch/x86/cat.c | 53 +++++++++++++++++++++------------------
>  1 file changed, 28 insertions(+), 25 deletions(-)
> 
> diff --git a/hypervisor/arch/x86/cat.c b/hypervisor/arch/x86/cat.c
> index f4c6f5d6..c91560a2 100644
> --- a/hypervisor/arch/x86/cat.c
> +++ b/hypervisor/arch/x86/cat.c
> @@ -105,32 +105,35 @@ static bool shrink_root_cell_mask(u64 cell_mask)
>  
>               root_cell.arch.cat_mask = 0;
>               merge_freed_mask_to_root();
> -     } else {
> -             /* Shrink the root cell's mask. */
> -             root_cell.arch.cat_mask &= ~cell_mask;
> +             /* Check again, as freed bits might also be in cell_mask */
> +             if ((root_cell.arch.cat_mask & ~cell_mask) == 0)
> +                     return false;

This does not look correct yet. If we fail late here, after the merge,
we already modified root_cell.arch.cat_mask, no?

> +     }
> +     /* Shrink the root cell's mask. Might still be required if freed
> +      * bits overlap with new mask. */
> +     root_cell.arch.cat_mask &= ~cell_mask;
>  
> -             /*
> -              * Ensure that the root mask is still contiguous:
> -              *
> -              * Check if taking out the new cell's mask from the root mask
> -              * created two halves there. Then shrink the root mask
> -              * additionally by the lower half and add that part to the
> -              * freed mask.
> -              *
> -              * Always removing the lower half simplifies this algorithm at
> -              * the price of possibly choosing the smaller sub-mask. Cell
> -              * configurations can avoid this by locating non-root cell
> -              * masks at the beginning of the L3 cache.
> -              */
> -             lo_mask_start = ffsl(root_cell.arch.cat_mask);
> -             lo_mask_len = ffzl(root_cell.arch.cat_mask >> lo_mask_start);
> -             lo_mask = BIT_MASK(lo_mask_start + lo_mask_len - 1,
> -                                lo_mask_start);
> -
> -             if (root_cell.arch.cat_mask & ~lo_mask) {
> -                     root_cell.arch.cat_mask &= ~lo_mask;
> -                     freed_mask |= lo_mask;
> -             }
> +     /*
> +     * Ensure that the root mask is still contiguous:
> +     *
> +     * Check if taking out the new cell's mask from the root mask
> +     * created two halves there. Then shrink the root mask
> +     * additionally by the lower half and add that part to the
> +     * freed mask.
> +     *
> +     * Always removing the lower half simplifies this algorithm at
> +     * the price of possibly choosing the smaller sub-mask. Cell
> +     * configurations can avoid this by locating non-root cell
> +     * masks at the beginning of the L3 cache.
> +     */

Comment indentation was damaged.

> +     lo_mask_start = ffsl(root_cell.arch.cat_mask);
> +     lo_mask_len = ffzl(root_cell.arch.cat_mask >> lo_mask_start);
> +     lo_mask = BIT_MASK(lo_mask_start + lo_mask_len - 1,
> +                             lo_mask_start);
> +
> +     if (root_cell.arch.cat_mask & ~lo_mask) {
> +             root_cell.arch.cat_mask &= ~lo_mask;
> +             freed_mask |= lo_mask;
>       }
>  
>       /* Cells using the root COS are also affected by shrinking. */
> 

Jan

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux
