Re: Please test: make ipsec(4) timeouts mpsafe

2023-10-17 Thread Hrvoje Popovski
On 17.10.2023. 1:07, Vitaliy Makkoveev wrote:
>> On 13 Oct 2023, at 18:40, Hrvoje Popovski  wrote:
>>
>> On 12.10.2023. 20:10, Vitaliy Makkoveev wrote:
>>> Hi, MP-safe process timeouts have landed in the tree, so it's time
>>> to test them with the network stack :) The diff below makes the tdb
>>> and ids garbage collector timeout handlers run without the kernel
>>> lock. Not for commit, just sharing this for tests if someone is
>>> interested.
>>
>> Hi,
>>
>> with this diff it seems to be a little slower than without it.
>> 165Kpps with diff
>> 200Kpps without diff
>>
> 
> Hi,
> 
> Thanks for testing. I'm interested in the slower results. I suspect an
> enabled/disabled POOL_DEBUG effect. Were the patched and unpatched
> builds made from the same sources?

Hi,

I'm running the same source with and without the diff, and with
kern.pool_debug=0.
I've tried a few times; sometimes I get the same results, but mostly
it's a little slower ...
I have two identical servers directly connected with 10G x540T
interfaces and one ipsec tunnel over that 10G link.
I'm testing like this:
compile kernel from source - test tunnel
apply your diff to the same source - test tunnel



Re: Please test: make ipsec(4) timeouts mpsafe

2023-10-16 Thread Vitaliy Makkoveev
> On 13 Oct 2023, at 18:40, Hrvoje Popovski  wrote:
> 
> On 12.10.2023. 20:10, Vitaliy Makkoveev wrote:
>> Hi, MP-safe process timeouts have landed in the tree, so it's time
>> to test them with the network stack :) The diff below makes the tdb
>> and ids garbage collector timeout handlers run without the kernel
>> lock. Not for commit, just sharing this for tests if someone is
>> interested.
> 
> Hi,
> 
> with this diff it seems to be a little slower than without it.
> 165Kpps with diff
> 200Kpps without diff
> 

Hi,

Thanks for testing. I'm interested in the slower results. I suspect an
enabled/disabled POOL_DEBUG effect. Were the patched and unpatched
builds made from the same sources?


Re: Please test: make ipsec(4) timeouts mpsafe

2023-10-13 Thread Hrvoje Popovski
On 12.10.2023. 20:10, Vitaliy Makkoveev wrote:
> Hi, MP-safe process timeouts have landed in the tree, so it's time to
> test them with the network stack :) The diff below makes the tdb and
> ids garbage collector timeout handlers run without the kernel lock.
> Not for commit, just sharing this for tests if someone is interested.

Hi,

with this diff it seems to be a little slower than without it.
165Kpps with diff
200Kpps without diff


test1
ike esp from 10.221.0.0/16 to 10.222.0.0/16 \
local 192.168.1.1 peer 192.168.1.2 \
main auth hmac-sha1 enc aes group modp1024 lifetime 3m \
quick enc aes-128-gcm group modp1024 lifetime 1m \
psk "123"

test2
ike esp from 10.222.0.0/16 to 10.221.0.0/16 \
local 192.168.1.2 peer 192.168.1.1 \
main auth hmac-sha1 enc aes group modp1024 lifetime 3m \
quick enc aes-128-gcm group modp1024 lifetime 1m \
psk "123"

I'm sending random /24 udp traffic from a host connected to the test1
box through the tunnel to a host connected to the test2 box ...
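(With the 3m main-mode and 1m quick-mode lifetimes above, the tunnel
should rekey roughly every minute, so the tdb expiration and ids garbage
collector handlers that the diff unlocks should fire repeatedly during
the test.)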


test1 - top -SHs1
  PID      TID PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
20980   359894  14    0    0K 1004K sleep/3   netlock   2:26 46.58% softnet3
54870   346439  14    0    0K 1004K sleep/3   netlock   2:24 42.33% softnet4
65020   320085  42    0    0K 1004K onproc/1  -         2:22 41.60% softnet5
 3723   371456  45    0    0K 1004K onproc/5  -         2:22 40.67% softnet1
16879   500721  43    0    0K 1004K onproc/4  -         2:26 39.06% softnet2
 1371   446835  14    0    0K 1004K sleep/2   netlock   0:13  5.37% softnet0



test2 - top -SHs1
  PID      TID PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
61821   455808  10    0    0K 1004K sleep/4   bored     3:02 86.96% softnet0
77299   594039  10    0    0K 1004K sleep/1   bored     0:33 21.63% softnet4
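(Note the difference in the WAIT column: on test1 several softnet
threads sleep on netlock, while on test2 nearly all of the work lands
on softnet0, which sleeps in "bored" between bursts.)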






Please test: make ipsec(4) timeouts mpsafe

2023-10-12 Thread Vitaliy Makkoveev
Hi,

MP-safe process timeouts have landed in the tree, so it's time to test
them with the network stack :) The diff below makes the tdb and ids
garbage collector timeout handlers run without the kernel lock.

Not for commit, just sharing this for tests if someone is interested.
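
For context, here is a minimal sketch of the pattern the diff applies
(the example_* names are hypothetical; the macros and flags come from
OpenBSD's sys/timeout.h). A timeout created with TIMEOUT_PROC runs its
handler in process context from the softclock thread; adding
TIMEOUT_MPSAFE makes the handler run without the kernel lock, so it has
to protect shared state with its own locks:

/* Static case, as with ipsp_ids_gc_timeout in the diff below. */
void	example_gc(void *);		/* hypothetical handler */

struct timeout example_gc_timeout =
    TIMEOUT_INITIALIZER_FLAGS(example_gc, NULL, KCLOCK_NONE,
    TIMEOUT_PROC | TIMEOUT_MPSAFE);

/* Dynamic case, as with the four tdb timeouts in tdb_alloc(). */
struct example_softc {
	struct timeout	sc_tmo;		/* hypothetical softc member */
};

void
example_init(struct example_softc *sc)
{
	timeout_set_flags(&sc->sc_tmo, example_gc, sc,
	    KCLOCK_NONE, TIMEOUT_PROC | TIMEOUT_MPSAFE);
	timeout_add_sec(&sc->sc_tmo, 1);	/* fire in one second */
}

void
example_gc(void *arg)
{
	/*
	 * Runs in the softclock thread without the kernel lock:
	 * anything touched here needs its own mutex or rwlock.
	 */
}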

Index: sys/netinet/ip_ipsp.c
===================================================================
RCS file: /cvs/src/sys/netinet/ip_ipsp.c,v
retrieving revision 1.277
diff -u -p -r1.277 ip_ipsp.c
--- sys/netinet/ip_ipsp.c   11 Oct 2023 22:13:16 -  1.277
+++ sys/netinet/ip_ipsp.c   12 Oct 2023 18:07:18 -
@@ -124,7 +124,8 @@ void ipsp_ids_gc(void *);
 LIST_HEAD(, ipsec_ids) ipsp_ids_gc_list =
 LIST_HEAD_INITIALIZER(ipsp_ids_gc_list);   /* [F] */
 struct timeout ipsp_ids_gc_timeout =
-TIMEOUT_INITIALIZER_FLAGS(ipsp_ids_gc, NULL, KCLOCK_NONE, TIMEOUT_PROC);
+TIMEOUT_INITIALIZER_FLAGS(ipsp_ids_gc, NULL, KCLOCK_NONE,
+TIMEOUT_PROC | TIMEOUT_MPSAFE);
 
 static inline int ipsp_ids_cmp(const struct ipsec_ids *,
 const struct ipsec_ids *);
@@ -1100,10 +1101,14 @@ tdb_alloc(u_int rdomain)
tdbp->tdb_counters = counters_alloc(tdb_ncounters);
 
/* Initialize timeouts. */
-   timeout_set_proc(&tdbp->tdb_timer_tmo, tdb_timeout, tdbp);
-   timeout_set_proc(&tdbp->tdb_first_tmo, tdb_firstuse, tdbp);
-   timeout_set_proc(&tdbp->tdb_stimer_tmo, tdb_soft_timeout, tdbp);
-   timeout_set_proc(&tdbp->tdb_sfirst_tmo, tdb_soft_firstuse, tdbp);
+   timeout_set_flags(&tdbp->tdb_timer_tmo, tdb_timeout, tdbp,
+   KCLOCK_NONE, TIMEOUT_PROC | TIMEOUT_MPSAFE);
+   timeout_set_flags(&tdbp->tdb_first_tmo, tdb_firstuse, tdbp,
+   KCLOCK_NONE, TIMEOUT_PROC | TIMEOUT_MPSAFE);
+   timeout_set_flags(&tdbp->tdb_stimer_tmo, tdb_soft_timeout, tdbp,
+   KCLOCK_NONE, TIMEOUT_PROC | TIMEOUT_MPSAFE);
+   timeout_set_flags(&tdbp->tdb_sfirst_tmo, tdb_soft_firstuse, tdbp,
+   KCLOCK_NONE, TIMEOUT_PROC | TIMEOUT_MPSAFE);
 
return tdbp;
 }