Author: andrew
Date: Mon Aug 21 18:12:32 2017
New Revision: 322769
URL: https://svnweb.freebsd.org/changeset/base/322769

Log:
  Improve the performance of the arm64 thread switching code.
  
  The full system memory barrier around a TLB invalidation is stricter than
  required. It only needs to wait for memory accesses within the inner
  shareable domain, and only for stores before the invalidate. As such, use
  the dsb ishst, tlbi, dsb ish sequence already used in pmap.
  
  The tlbi instruction in this sequence also unnecessarily uses a broadcast
  invalidate when it only needs to invalidate the local CPU's TLB. Switch to
  the non-broadcast variant of this instruction.
  
  Sponsored by: DARPA, AFRL
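
A minimal sketch (not part of the commit) of the new local-only sequence,
wrapped in a C inline-assembly helper as it might appear outside swtch.S;
the helper name local_tlb_flush_all() is hypothetical:

	static inline void
	local_tlb_flush_all(void)
	{
		__asm __volatile(
		    "dsb	ishst	\n"	/* wait for prior page-table stores */
		    "tlbi	vmalle1	\n"	/* invalidate EL1 TLB entries on this CPU only */
		    "dsb	ish	\n"	/* wait for the invalidation to complete */
		    "isb"			/* resynchronize the instruction stream */
		    : : : "memory");
	}

Dropping the "is" suffix from tlbi vmalle1is keeps the invalidation local to
the current CPU instead of broadcasting it to every CPU in the inner
shareable domain, which is all that is required here.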

Modified:
  head/sys/arm64/arm64/swtch.S

Modified: head/sys/arm64/arm64/swtch.S
==============================================================================
--- head/sys/arm64/arm64/swtch.S	Mon Aug 21 18:00:26 2017	(r322768)
+++ head/sys/arm64/arm64/swtch.S	Mon Aug 21 18:12:32 2017	(r322769)
@@ -91,9 +91,9 @@ ENTRY(cpu_throw)
        isb
 
        /* Invalidate the TLB */
-       dsb     sy
-       tlbi    vmalle1is
-       dsb     sy
+       dsb     ishst
+       tlbi    vmalle1
+       dsb     ish
        isb
 
        /* If we are single stepping, enable it */
@@ -192,9 +192,9 @@ ENTRY(cpu_switch)
        isb
 
        /* Invalidate the TLB */
-       dsb     sy
-       tlbi    vmalle1is
-       dsb     sy
+       dsb     ishst
+       tlbi    vmalle1
+       dsb     ish
        isb
 
        /*