To do a write to memory that is marked as notdirty, we need to
invalidate any TBs we have cached for that memory and update the CPU
physical memory dirty flags for VGA and migration. The slowpath code
in notdirty_mem_write() does all this correctly, but the new atomic
handling code in atomic_mmu_lookup() does none of it: it just clears
the dirty bit in the TLB.

The effect of this bug is that if the first write to a notdirty page
for which we have cached TBs is a guest atomic access, we fail to
invalidate the TBs and subsequently execute incorrect code. This can
be seen by trying to run 'javac' on AArch64.

The first patch here refactors notdirty_mem_write() to pull the
"correctly handle dirty bit updates" parts of the code out into two
new functions, memory_notdirty_write_prepare() and
memory_notdirty_write_complete(). The second patch then uses those
functions to fix the atomic helpers.
In an ideal world I'd like to get this fix into rc2 tomorrow so it
gets wider testing exposure before release.

thanks
-- PMM

Peter Maydell (2):
  exec.c: Factor out before/after actions for notdirty memory writes
  accel/tcg: Handle atomic accesses to notdirty memory correctly

 accel/tcg/atomic_template.h    | 12 ++++++++
 include/exec/memory-internal.h | 56 ++++++++++++++++++++++++++++++++++++
 accel/tcg/cputlb.c             | 29 ++++++++++---------
 accel/tcg/user-exec.c          |  1 +
 exec.c                         | 65 ++++++++++++++++++++++++++++--------------
 5 files changed, 129 insertions(+), 34 deletions(-)

-- 
2.7.4