Re: svn commit: r357805 - head/sys/amd64/include

2020-02-12 Thread Mateusz Guzik
On 2/12/20, Gleb Smirnoff  wrote:
> On Wed, Feb 12, 2020 at 11:12:14AM +, Mateusz Guzik wrote:
> M> Author: mjg
> M> Date: Wed Feb 12 11:12:13 2020
> M> New Revision: 357805
> M> URL: https://svnweb.freebsd.org/changeset/base/357805
> M>
> M> Log:
> M>   amd64: store per-cpu allocations subtracted by __pcpu
> M>
> M>   This eliminates a runtime subtraction from counter_u64_add.
> M>
> M>   before:
> M>   mov    0x4f00ed(%rip),%rax    # 0x80c01788
> M>   sub    0x808ff6(%rip),%rax    # 0x80f1a698 <__pcpu>
> M>   addq   $0x1,%gs:(%rax)
> M>
> M>   after:
> M>   mov    0x4f02fd(%rip),%rax    # 0x80c01788
> M>   addq   $0x1,%gs:(%rax)
> M>
> M>   Reviewed by: jeff
> M>   Differential Revision:   https://reviews.freebsd.org/D23570
>
> Neat optimization! Thanks. Why didn't we do it back when we created counter?
>

Don't look at me, I did not work on it.

You can top it for counters like the above -- most real counters are
known at compilation time and they never disappear. Meaning that in the
simplest case they can just be part of one big array in struct pcpu.
Then the assembly could resort to addq $0x1,%gs:(someoffset), removing
the mov that loads the address -- faster single-threaded and less cache
use.

I'm confident I noted this at least a few times.

-- 
Mateusz Guzik 
___
svn-src-all@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-all
To unsubscribe, send any mail to "svn-src-all-unsubscr...@freebsd.org"


Re: svn commit: r357805 - head/sys/amd64/include

2020-02-12 Thread Gleb Smirnoff
On Wed, Feb 12, 2020 at 11:12:14AM +, Mateusz Guzik wrote:
M> Author: mjg
M> Date: Wed Feb 12 11:12:13 2020
M> New Revision: 357805
M> URL: https://svnweb.freebsd.org/changeset/base/357805
M> 
M> Log:
M>   amd64: store per-cpu allocations subtracted by __pcpu
M>   
M>   This eliminates a runtime subtraction from counter_u64_add.
M>   
M>   before:
M>   mov    0x4f00ed(%rip),%rax    # 0x80c01788
M>   sub    0x808ff6(%rip),%rax    # 0x80f1a698 <__pcpu>
M>   addq   $0x1,%gs:(%rax)
M>   
M>   after:
M>   mov    0x4f02fd(%rip),%rax    # 0x80c01788
M>   addq   $0x1,%gs:(%rax)
M>   
M>   Reviewed by:   jeff
M>   Differential Revision: https://reviews.freebsd.org/D23570

Neat optimization! Thanks. Why didn't we do it back when we created counter?

-- 
Gleb Smirnoff


svn commit: r357805 - head/sys/amd64/include

2020-02-12 Thread Mateusz Guzik
Author: mjg
Date: Wed Feb 12 11:12:13 2020
New Revision: 357805
URL: https://svnweb.freebsd.org/changeset/base/357805

Log:
  amd64: store per-cpu allocations subtracted by __pcpu
  
  This eliminates a runtime subtraction from counter_u64_add.
  
  before:
  mov    0x4f00ed(%rip),%rax    # 0x80c01788
  sub    0x808ff6(%rip),%rax    # 0x80f1a698 <__pcpu>
  addq   $0x1,%gs:(%rax)
  
  after:
  mov    0x4f02fd(%rip),%rax    # 0x80c01788
  addq   $0x1,%gs:(%rax)
  
  Reviewed by:  jeff
  Differential Revision:	https://reviews.freebsd.org/D23570

Modified:
  head/sys/amd64/include/counter.h
  head/sys/amd64/include/pcpu.h

Modified: head/sys/amd64/include/counter.h
==============================================================================
--- head/sys/amd64/include/counter.h	Wed Feb 12 11:11:22 2020	(r357804)
+++ head/sys/amd64/include/counter.h	Wed Feb 12 11:12:13 2020	(r357805)
@@ -33,7 +33,7 @@
 
 #include 
 
-#define	EARLY_COUNTER	_bsp_pcpu.pc_early_dummy_counter
+#define	EARLY_COUNTER	(void *)__offsetof(struct pcpu, pc_early_dummy_counter)
 
 #define	counter_enter()	do {} while (0)
 #define	counter_exit()	do {} while (0)
@@ -43,6 +43,7 @@ static inline uint64_t
 counter_u64_read_one(counter_u64_t c, int cpu)
 {
 
+   MPASS(c != EARLY_COUNTER);
return (*zpcpu_get_cpu(c, cpu));
 }
 
@@ -65,6 +66,7 @@ counter_u64_zero_one_cpu(void *arg)
counter_u64_t c;
 
c = arg;
+   MPASS(c != EARLY_COUNTER);
*(zpcpu_get(c)) = 0;
 }
 
@@ -86,7 +88,7 @@ counter_u64_add(counter_u64_t c, int64_t inc)
KASSERT(IS_BSP() || c != EARLY_COUNTER, ("EARLY_COUNTER used on AP"));
__asm __volatile("addq\t%1,%%gs:(%0)"
:
-   : "r" ((char *)c - (char *)&__pcpu[0]), "ri" (inc)
+   : "r" (c), "ri" (inc)
: "memory", "cc");
 }
 

Modified: head/sys/amd64/include/pcpu.h
==============================================================================
--- head/sys/amd64/include/pcpu.h	Wed Feb 12 11:11:22 2020	(r357804)
+++ head/sys/amd64/include/pcpu.h	Wed Feb 12 11:12:13 2020	(r357805)
@@ -240,6 +240,10 @@ _Static_assert(sizeof(struct monitorbuf) == 128, "2x c
 
 #define	IS_BSP()	(PCPU_GET(cpuid) == 0)
 
+#define	zpcpu_offset_cpu(cpu)	((uintptr_t)&__pcpu[0] + UMA_PCPU_ALLOC_SIZE * cpu)
+#define	zpcpu_base_to_offset(base) (void *)((uintptr_t)(base) - (uintptr_t)&__pcpu[0])
+#define	zpcpu_offset_to_base(base) (void *)((uintptr_t)(base) + (uintptr_t)&__pcpu[0])
+
 #else /* !__GNUCLIKE_ASM || !__GNUCLIKE___TYPEOF */
 
 #error "this file needs to be ported to your compiler"