On 22/10/2025 22:20, Richard Henderson wrote:
On 10/22/25 13:59, Nikita Novikov wrote:
Fixes: ec03dd972378 ("accel/tcg: Hoist first page lookup above pointer_wrap")
This cannot be true, btw, because ...
- if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
+ if (mmu_lookup1(cpu, &l->page[1], l->memop, l->mmu_idx, type, ra)) {
... this line did not change with that patch.
My bad, sorry. I think `Related-to` would be a better tag, since what this
patch actually does is make both halves of a split-page access consistent.
Or did I misunderstand you?
    uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
    l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
}
How is the memop really applicable to the second half of a split-page operation?
Because the second half is still part of the same guest memory operation.
It must obey the same size, alignment, and atomicity rules. Passing the
real memop ensures correct alignment and atomicity checks even when the
access crosses a page boundary.
How?
Let's use a concrete example: an access MO_64 | MO_UNALN at 0x1fffd.
The first `tlb_fill` gets to see the start address 0x1fffd and the
length 3 (and also the memop). The second `tlb_fill` gets to see the
second-page address 0x20000 and the length 5 (but not the memop).
Exactly what is the second `tlb_fill` going to do with 0x20000 and
MO_64 | MO_UNALN?
The guest does a 64-bit access with flags MO_64 | MO_UNALN at address
0x1fffd. The first `tlb_fill` handles bytes 0x1fffd-0x1ffff with length 3
and sees the real `memop`, so it knows the access is unaligned but
allowed. The second `tlb_fill` handles bytes 0x20000-0x20004 with
length 5. If it gets `memop == 0`, it loses all information about the
access type, alignment, and atomicity. With the real memop it stays
consistent with the first half: it knows it is part of one 64-bit
unaligned access, not a separate normal one. For `MO_UNALN` this changes
nothing, but for atomic or alignment-restricted cases it prevents the
second page from being treated incorrectly as an ordinary access.
Passing the real memop also keeps the second half from raising a
spurious alignment fault, and applies the same slow-path flags
(e.g. `TLB_CHECK_ALIGNED`) and region checks as the first half.
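To make the byte split in the example above concrete, here is a minimal,
self-contained sketch. This is not the actual QEMU `mmu_lookup` code; the
page size, `PAGE_MASK`, and the `split_access` helper are simplified
assumptions for illustration (the `MO_SIZE` mask encoding matches the one
in QEMU's memop.h):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical, simplified MemOp encoding: low bits hold log2(size). */
#define MO_SIZE   0x07
#define MO_64     0x03          /* 8-byte access */
#define MO_UNALN  0x00          /* no alignment requirement */

/* Hypothetical 4 KiB target page, for illustration only. */
#define PAGE_SIZE 0x1000
#define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))

static inline size_t memop_size(unsigned op)
{
    return (size_t)1 << (op & MO_SIZE);
}

/* Divide one guest access into per-page byte counts.  Both halves
 * come from the same memop, which is why the second lookup should
 * see the real memop too, not 0. */
static void split_access(uint64_t addr, unsigned memop,
                         size_t *size0, size_t *size1)
{
    size_t size = memop_size(memop);
    uint64_t page_end = (addr & PAGE_MASK) + PAGE_SIZE;

    if (addr + size <= page_end) {
        *size0 = size;          /* fits entirely in one page */
        *size1 = 0;
    } else {
        *size1 = (size_t)(addr + size - page_end);
        *size0 = size - *size1;
    }
}
```

For the example access at 0x1fffd this yields 3 bytes on the first page
and 5 on the second, matching the lengths the two `tlb_fill` calls see.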
I ran into this bug while debugging a case where `memop_size` came out
as 1, because `memop_size(0) == 1`. That makes it look as if the second
half carries no valid access information, which is not true.
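The pitfall can be shown in a few lines. This is a sketch assuming the
same simplified `MO_SIZE` mask encoding as QEMU's MemOp (the defines
below are illustrative, not the real header): a cleared memop of 0
decodes to the same size as a genuine 1-byte access, so the two cases
are indistinguishable downstream.

```c
#include <stddef.h>

/* Hypothetical, simplified MemOp encoding for illustration. */
#define MO_SIZE 0x07   /* mask for the log2(size) bits */
#define MO_8    0x00   /* 1-byte access */
#define MO_64   0x03   /* 8-byte access */

static inline size_t memop_size(unsigned op)
{
    /* memop_size(0) == 1: a cleared memop looks exactly like MO_8. */
    return (size_t)1 << (op & MO_SIZE);
}
```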
r~