Re: [Qemu-devel] [PATCH v2 16/28] s390x/tcg: Fault-safe memmove
On 12.09.19 00:03, Richard Henderson wrote:
> On 9/6/19 3:57 AM, David Hildenbrand wrote:
>> +static void access_memmove_idx(CPUS390XState *env, vaddr dest, vaddr src,
>> +                               int size, int dest_idx, int src_idx,
>> +                               uintptr_t ra)
>> +{
>> +    S390Access srca = access_prepare_idx(env, src, size, MMU_DATA_LOAD,
>> +                                         src_idx, ra);
>> +    S390Access desta = access_prepare_idx(env, dest, size, MMU_DATA_STORE,
>> +                                          dest_idx, ra);
>
> I was just thinking that it might be worth passing in these Access structures.
> It seems that usually we wind up computing them in several locations.
>
> Hoisting it up would make MVC look like
>
>     midx = cpu_mmu_index(env);
>     srca = access_prepare_idx(env, src, size, LOAD, midx, ra);
>     dsta = access_prepare_idx(env, dst, size, STORE, midx, ra);
>
>     if (dst == src + 1) {
>         uint8_t x = access_get_byte(env, , 0, ra);
>         access_memset(env, , x, size, ra);
>     } else if (!is_destructive_overlap(env, dst, src, size)) {
>         access_memmove(env, , , size, ra);
>     } else {
>         /* byte by byte loop, but still need srca, dsta */
>     }
>
> which seems even More Correct, since the current access_memset path does not
> check for read watchpoints or faults on all of [src, src+size-1].

I had precisely that in previous versions :) Can switch to that model.

-- 
Thanks,

David / dhildenb
Re: [Qemu-devel] [PATCH v2 16/28] s390x/tcg: Fault-safe memmove
On 9/6/19 3:57 AM, David Hildenbrand wrote:
> +static void access_memmove_idx(CPUS390XState *env, vaddr dest, vaddr src,
> +                               int size, int dest_idx, int src_idx,
> +                               uintptr_t ra)
> +{
> +    S390Access srca = access_prepare_idx(env, src, size, MMU_DATA_LOAD,
> +                                         src_idx, ra);
> +    S390Access desta = access_prepare_idx(env, dest, size, MMU_DATA_STORE,
> +                                          dest_idx, ra);

I was just thinking that it might be worth passing in these Access structures.
It seems that usually we wind up computing them in several locations.

Hoisting it up would make MVC look like

    midx = cpu_mmu_index(env);
    srca = access_prepare_idx(env, src, size, LOAD, midx, ra);
    dsta = access_prepare_idx(env, dst, size, STORE, midx, ra);

    if (dst == src + 1) {
        uint8_t x = access_get_byte(env, , 0, ra);
        access_memset(env, , x, size, ra);
    } else if (!is_destructive_overlap(env, dst, src, size)) {
        access_memmove(env, , , size, ra);
    } else {
        /* byte by byte loop, but still need srca, dsta */
    }

which seems even More Correct, since the current access_memset path does not
check for read watchpoints or faults on all of [src, src+size-1].

r~
Re: [Qemu-devel] [PATCH v2 16/28] s390x/tcg: Fault-safe memmove
On 9/6/19 3:57 AM, David Hildenbrand wrote:
> Replace fast_memmove() variants by access_memmove() variants, that
> first try to probe access to all affected pages (maximum is two pages).
>
> In MVCOS, simply always call access_memmove_as() and drop the TODO
> about LAP. LAP is already handled in the MMU.
>
> Get rid of adj_len_to_page(), which is now unused.
>
> Signed-off-by: David Hildenbrand
> ---
>  target/s390x/mem_helper.c | 204 +-
>  1 file changed, 115 insertions(+), 89 deletions(-)

Reviewed-by: Richard Henderson

r~