This is an automated email from Gerrit.

Antonio Borneo (borneo.anto...@gmail.com) just uploaded a new patch set to 
Gerrit, which you can find at http://openocd.zylin.com/5138

-- gerrit

commit 11fd7c1926c9e38870ea1218954134c802184aa2
Author: Antonio Borneo <borneo.anto...@gmail.com>
Date:   Sat Apr 27 15:52:52 2019 +0200

    target/cortex_a: use aligned accesses for read/write cpu memory slow
    
    ARMv7-A is able to read and write memory at unaligned addresses,
    but only when the bit SCTLR.A (Alignment check enable) is zero and
    the address belongs to a memory space with the "Normal" attribute
    (see [1] chapter A3.2.1 "Unaligned data access"). In all other
    cases the memory access triggers an alignment fault data abort
    exception.
    Memory attributes are explained in [1] chapter A3.5 "Memory types
    and attributes and the memory order model".
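
    As a minimal sketch (with a hypothetical helper, not part of this
    patch), the condition for a tolerated unaligned access boils down
    to:

        #include <stdbool.h>
        #include <stdint.h>

        /* Sketch only: an unaligned access is tolerated when SCTLR.A
         * (bit 1 of SCTLR) is zero and the access targets "Normal"
         * memory; 'is_normal' is an assumed input, since the memory
         * attribute depends on the translation regime. */
        static bool unaligned_access_ok(uint32_t sctlr, bool is_normal)
        {
                return (sctlr & (1u << 1)) == 0 && is_normal;
        }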
    
    Disabling the MMU causes a change in the memory attributes, as
    explained in [1] chapter B3.2 "The effects of disabling MMUs on
    VMSA behavior".
    This can cause several issues: e.g. a SW breakpoint on an unaligned
    4-byte Thumb instruction, set while the MMU is on, can be
    impossible to remove once the MMU is turned off.
    
    While it is possible to check all these conditions before every
    unaligned memory access, it is clearly more maintainable to skip
    such complexity and only perform aligned accesses.
    
    Check the alignment and, if needed, reduce the data size before
    calling the functions cortex_a_{read,write}_cpu_memory_slow().
    Update the comment in these two functions to match the new
    behaviour.
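
    For example, with this size adjustment a 4-byte transfer at address
    0x1001 is performed as four 1-byte transfers, and a 4-byte transfer
    at 0x1002 as two 2-byte transfers. A standalone sketch of the same
    adjustment (the function name is illustrative only):

        #include <stdint.h>

        /* Sketch of the size/count adjustment: odd addresses fall back
         * to byte accesses; halfword-aligned 4-byte accesses are split
         * into two halfword accesses; already-aligned cases are left
         * untouched. */
        static void adjust_for_alignment(uint32_t address,
                uint32_t *size, uint32_t *count)
        {
                if (address % 2) {
                        *count *= *size;
                        *size = 1;
                } else if (address % 4 != 0 && *size == 4) {
                        *count *= 2;
                        *size = 2;
                }
        }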
    
    [1] ARM DDI 0406C.d - "ARM Architecture Reference Manual, ARMv7-A
        and ARMv7-R edition"
    
    Change-Id: I57b4c11e7fa7e78aaaaee4406a5734b48db740ae
    Signed-off-by: Antonio Borneo <borneo.anto...@gmail.com>

diff --git a/src/target/cortex_a.c b/src/target/cortex_a.c
index 8d0e100..27eb3b0 100644
--- a/src/target/cortex_a.c
+++ b/src/target/cortex_a.c
@@ -1892,7 +1892,8 @@ static int cortex_a_write_cpu_memory_slow(struct target *target,
 {
        /* Writes count objects of size size from *buffer. Old value of DSCR must
         * be in *dscr; updated to new value. This is slow because it works for
-        * non-word-sized objects and (maybe) unaligned accesses. If size == 4 and
+        * non-word-sized objects. Avoid unaligned accesses, as they do not
+        * work on memory without the "Normal" attribute. If size == 4 and
         * the address is aligned, cortex_a_write_cpu_memory_fast should be
         * preferred.
         * Preconditions:
@@ -2049,7 +2050,23 @@ static int cortex_a_write_cpu_memory(struct target *target,
                /* We are doing a word-aligned transfer, so use fast mode. */
               retval = cortex_a_write_cpu_memory_fast(target, count, buffer, &dscr);
        } else {
-               /* Use slow path. */
+               /* Use slow path. Adjust size for aligned accesses */
+               switch (address % 4) {
+                       case 1:
+                       case 3:
+                               count *= size;
+                               size = 1;
+                               break;
+                       case 2:
+                               if (size == 4) {
+                                       count *= 2;
+                                       size = 2;
+                               }
+                               break;
+                       case 0:
+                       default:
+                               break;
+               }
               retval = cortex_a_write_cpu_memory_slow(target, size, count, buffer, &dscr);
        }
 
@@ -2135,7 +2151,8 @@ static int cortex_a_read_cpu_memory_slow(struct target *target,
 {
        /* Reads count objects of size size into *buffer. Old value of DSCR must be
         * in *dscr; updated to new value. This is slow because it works for
-        * non-word-sized objects and (maybe) unaligned accesses. If size == 4 and
+        * non-word-sized objects. Avoid unaligned accesses, as they do not
+        * work on memory without the "Normal" attribute. If size == 4 and
         * the address is aligned, cortex_a_read_cpu_memory_fast should be
         * preferred.
         * Preconditions:
@@ -2351,7 +2368,23 @@ static int cortex_a_read_cpu_memory(struct target *target,
                /* We are doing a word-aligned transfer, so use fast mode. */
               retval = cortex_a_read_cpu_memory_fast(target, count, buffer, &dscr);
        } else {
-               /* Use slow path. */
+               /* Use slow path. Adjust size for aligned accesses */
+               switch (address % 4) {
+                       case 1:
+                       case 3:
+                               count *= size;
+                               size = 1;
+                               break;
+                       case 2:
+                               if (size == 4) {
+                                       count *= 2;
+                                       size = 2;
+                               }
+                               break;
+                       case 0:
+                       default:
+                               break;
+               }
               retval = cortex_a_read_cpu_memory_slow(target, size, count, buffer, &dscr);
        }
 
