On 08/10/2025 16.14, Philippe Mathieu-Daudé wrote:
If an address_space_rw() call ever fails, break the loop and
return the PGM_ADDRESSING error (after triggering an access
exception).

Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
---
  target/s390x/mmu_helper.c | 10 ++++++++--
  1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
index 22d3d4a97df..3b1e75f7833 100644
--- a/target/s390x/mmu_helper.c
+++ b/target/s390x/mmu_helper.c
@@ -546,9 +546,15 @@ int s390_cpu_virt_mem_rw(S390CPU *cpu, vaddr laddr, uint8_t ar, void *hostbuf,
          /* Copy data by stepping through the area page by page */
          for (i = 0; i < nr_pages; i++) {
+            MemTxResult res;
+
              currlen = MIN(len, TARGET_PAGE_SIZE - (laddr % TARGET_PAGE_SIZE));
-            address_space_rw(as, pages[i] | (laddr & ~TARGET_PAGE_MASK),
-                             attrs, hostbuf, currlen, is_write);
+            res = address_space_rw(as, pages[i] | (laddr & ~TARGET_PAGE_MASK),
+                                   attrs, hostbuf, currlen, is_write);
+            if (res != MEMTX_OK) {
+                ret = PGM_ADDRESSING;
+                break;
+            }
              laddr += currlen;
              hostbuf += currlen;
              len -= currlen;
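
For reference (not part of this patch): callers of s390_cpu_virt_mem_rw()
already treat a non-zero return value as a pending program interrupt, so the
new PGM_ADDRESSING error should flow into the existing handling. A rough
sketch of that caller pattern, with illustrative local names (addr, schib, ra)
rather than any specific call site:

    if (s390_cpu_virt_mem_read(cpu, addr, ar, &schib, sizeof(schib))) {
        /* The access exception was already triggered inside the helper;
         * under TCG this exits the TB so the interrupt gets delivered. */
        s390_cpu_virt_mem_handle_exc(cpu, ra);
        return;
    }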

Reviewed-by: Thomas Huth <[email protected]>

