On 15.05.2025 12:52, Roger Pau Monné wrote:
> On Mon, May 12, 2025 at 05:38:07PM +0200, Jan Beulich wrote:
>> On 06.05.2025 14:55, Roger Pau Monné wrote:
>>> On Tue, May 06, 2025 at 12:16:00PM +0100, Andrew Cooper wrote:
>>>> On 06/05/2025 9:31 am, Roger Pau Monne wrote:
>>>>> When a guest is allowed access to cache control operations, such tracking
>>>>> avoids having to issue a system-wide cache flush, and instead allows
>>>>> flushing just the pCPUs where the vCPU has been scheduled since the last
>>>>> flush.
>>>>>
>>>>> Note that domain-wide flushes accumulate the dirty cache masks from all
>>>>> the vCPUs, but clearing the vCPU masks would require pausing all vCPUs,
>>>>> which seems overkill.  Instead leave the vCPU dirty masks as-is; worst
>>>>> case this will result in redundant flushes on further calls.
>>>>>
>>>>> Signed-off-by: Roger Pau Monné <roger....@citrix.com>
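
For illustration, a minimal sketch of the tracking scheme described in the
quoted message, in Xen-style C (the identifiers dirty_cache_mask,
vcpu_mark_cache_dirty() and domain_cache_flush() are made up for this sketch
and need not match what the actual patch uses):

    #include <xen/cpumask.h>

    /* Sketch only: per-vCPU record of pCPUs that may hold dirty lines. */
    struct vcpu_cache_track {
        cpumask_t dirty_cache_mask; /* pCPUs the vCPU ran on since the last flush */
    };

    /* Called when the vCPU is scheduled onto a pCPU. */
    static void vcpu_mark_cache_dirty(struct vcpu_cache_track *vt, unsigned int cpu)
    {
        cpumask_set_cpu(cpu, &vt->dirty_cache_mask);
    }

    /*
     * Domain-wide flush: OR together all vCPU masks and flush only that set,
     * rather than every pCPU in the system.  The per-vCPU masks are left
     * as-is (clearing them would require pausing every vCPU); the worst case
     * is a redundant flush on a later call.
     */
    static void domain_cache_flush(struct vcpu_cache_track *vt, unsigned int nr_vcpus,
                                   void (*flush_mask)(const cpumask_t *))
    {
        cpumask_t mask;
        unsigned int i;

        cpumask_clear(&mask);
        for ( i = 0; i < nr_vcpus; i++ )
            cpumask_or(&mask, &mask, &vt[i].dirty_cache_mask);

        flush_mask(&mask); /* e.g. IPI the accumulated set and WBINVD there */
    }
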
>>>>
>>>> I'm afraid this doesn't work.
>>>>
>>>> Unlike TLBs, dirty cacheline can move sideways, e.g. by foreign or grant
>>>> mapping, but also naturally because of how cache coherency works.
>>>
>>> Does such sideways movement also imply that local WB{NO,}INVD on native
>>> could be equally bogus?
>>>
>>> According to the SDM, cache lines can indeed move between processor
>>> caches, but the memory controller must always snoop such moves and
>>> flush the data to memory:
>>>
>>> "Here, the processor with the valid data may pass the data to the
>>> other processors without actually writing it to system memory;
>>> however, it is the responsibility of the memory controller to snoop
>>> this operation and update memory."
>>>
>>> So a cache line moving sideways will always be propagated to memory as
>>> part of the move, and hence the data in the previous pCPU's cache will
>>> always hit memory.
>>
>> But that's only one of the two aspects of a flush (the write-back part).
>> The other is to ensure the respective data isn't in any (covered) cache
>> anymore.  IOW dirty-ness (as the title has it) isn't a criterion, unless of
>> course you mean "dirty" in a sense different from what it means in the
>> cache coherency model.
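
To make the two aspects concrete, a rough sketch using raw inline assembly
(Xen has its own wrappers for this; the helpers below are purely
illustrative):

    /* Write back dirty lines AND evict them from the local CPU's caches. */
    static inline void cache_writeback_and_invalidate(void)
    {
        asm volatile ( "wbinvd" ::: "memory" );
    }

    /*
     * Write back dirty lines but keep the (now clean) copies cached.
     * WBNOINVD is an F3 prefix on WBINVD; CPUs lacking the feature ignore
     * the prefix and perform a full WBINVD instead.
     */
    static inline void cache_writeback_only(void)
    {
        asm volatile ( "repe; wbinvd" ::: "memory" );
    }
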
> 
> Given the direct map, and the fact that the CPU can speculatively load
> entries into the cache, I'm not sure there's much Xen can effectively do
> ATM to ensure a certain cache line is not in any cache anymore.
> 
> It would possibly get better if/when the direct map is removed, but
> even then Xen (or dom0) might map guest memory for emulation or IO
> purposes.  Then Xen/dom0 would need to issue a wbinvd after unmapping
> the memory, to ensure the cache is clean before a vCPU from the domain
> is scheduled there.
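
For the mapping scenario just described, an alternative to a full wbinvd
after unmapping would be a per-line CLFLUSHOPT walk over just the range that
was mapped, roughly as below (a sketch; 64-byte cache lines and CLFLUSHOPT
support are assumed, and the function name is made up):

    #include <xen/types.h>

    /*
     * Write back and evict, from the whole coherency domain, every cache
     * line covering a temporary mapping before tearing it down, so no guest
     * data for that range remains cached.
     */
    static void flush_range_before_unmap(const void *va, size_t bytes)
    {
        unsigned long addr = (unsigned long)va & ~0x3fUL; /* assumed 64-byte lines */
        unsigned long end = (unsigned long)va + bytes;

        for ( ; addr < end; addr += 64 )
            asm volatile ( "clflushopt %0" : "+m" (*(volatile char *)addr) );

        asm volatile ( "sfence" ::: "memory" ); /* CLFLUSHOPT is ordered only by fences */
    }
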
> 
> IMO, realistically, it is very unlikely for Xen to be able to ensure
> that after a guest wbinvd there are no guest-owned cache lines in any
> CPU cache, even if such a wbinvd is propagated to all host CPUs.

Well, see Andrew's reply on one of my two "restrict guest-induced WBINVD"
patches. What you say effectively supports the statement I make in the
descriptions there ("We allow its use for writeback purposes only anyway,
..."), which Andrew says is wrong for at least future use cases, possibly
even today's. IOW I think we first need to find some common ground on the
underlying goals / principles.

Jan
