On 06.05.2025 14:55, Roger Pau Monné wrote:
> On Tue, May 06, 2025 at 12:16:00PM +0100, Andrew Cooper wrote:
>> On 06/05/2025 9:31 am, Roger Pau Monne wrote:
>>> When a guest is allowed access to cache control operations such tracking
>>> prevents having to issue a system-wide cache flush, and rather just flush
>>> the pCPUs where the vCPU has been scheduled since the last flush.
>>>
>>> Note that domain-wide flushes accumulate the dirty caches from all the
>>> vCPUs, but clearing the vCPU masks will require pausing all vCPUs, which
>>> seems overkill.  Instead leave the vCPU dirty masks as-is; worst case it
>>> will result in redundant flushes in further calls.
>>>
>>> Signed-off-by: Roger Pau Monné <roger....@citrix.com>
>>
>> I'm afraid this doesn't work.
>>
>> Unlike TLBs, dirty cacheline can move sideways, e.g. by foreign or grant
>> mapping, but also naturally because of how cache coherency works.
> 
> Does such sideways movement also imply that a local WB{NO,}INVD on
> native could be equally bogus?
> 
> According to the SDM, cache lines can indeed move between processor
> caches, but the memory controller must always snoop such moves and
> flush the data to memory:
> 
> "Here, the processor with the valid data may pass the data to the
> other processors without actually writing it to system memory;
> however, it is the responsibility of the memory controller to snoop
> this operation and update memory."
> 
> So a cache line moving sideways will always be propagated to memory as
> part of the move, and hence the data in the previous pCPU cache will
> always hit memory.

But that's only one of the two aspects of a flush. The other is to ensure
the respective data isn't in any (covered) cache anymore. IOW dirty-ness (as
the title has it) isn't a criterion, unless of course you mean "dirty" in
a sense different from what it means in the cache coherency model.
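For reference, the per-vCPU tracking being discussed could be sketched
roughly as below. This is a simplified illustration with made-up names
(vcpu_scheduled_on, domain_flush_mask), not the actual patch; it only
shows the accumulate-without-clearing behaviour described in the commit
message.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-vCPU dirty-cache mask: one bit per pCPU the vCPU
 * has been scheduled on since the last flush (assumes <= 64 pCPUs
 * for the sake of the sketch). */
struct vcpu_sketch {
    uint64_t dirty_cache_mask;
};

/* Record that this vCPU ran on the given pCPU, which may now hold
 * cache lines on the vCPU's behalf. */
static void vcpu_scheduled_on(struct vcpu_sketch *v, unsigned int pcpu)
{
    v->dirty_cache_mask |= 1ULL << pcpu;
}

/* Domain-wide flush: accumulate the masks of all vCPUs and return the
 * set of pCPUs to flush.  The per-vCPU masks are deliberately left
 * as-is (clearing them would require pausing every vCPU); the worst
 * case is a redundant flush on a later call. */
static uint64_t domain_flush_mask(struct vcpu_sketch *vcpus, unsigned int n)
{
    uint64_t mask = 0;

    for (unsigned int i = 0; i < n; i++)
        mask |= vcpus[i].dirty_cache_mask;

    return mask; /* caller would then flush exactly these pCPUs */
}
```

Note this addresses only where dirty data might reside; as pointed out
above, it says nothing about invalidating clean copies from other caches.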

Jan
