On 10/22/25 13:59, Nikita Novikov wrote:
Fixes: ec03dd972378 ("accel/tcg: Hoist first page lookup above pointer_wrap")

This cannot be true, btw, because ...

-        if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
+        if (mmu_lookup1(cpu, &l->page[1], l->memop, l->mmu_idx, type, ra)) {

... this line did not change with that patch.

               uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
               l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
           }

How is the memop really applicable to the second half of a split-page operation?

Because the second half is still part of the same guest memory operation. It
must obey the same size, alignment, and atomicity rules. Passing the real
memop ensures correct alignment and atomic checks even if the access crosses
a page boundary.

How?

Let's use a concrete example: Access MO_64 | MO_UNALN at 0x1fffd.

The first tlb_fill gets to see the start address 0x1fffd, and the length 3 (and also the memop).

The second tlb_fill gets to see the second page address 0x20000 and the length 5 (but not the memop).

Exactly what is the second tlb_fill going to do with 0x20000 and
MO_64 | MO_UNALN?



r~