[Bug 264257] [tcp] Panic: Fatal trap 12: page fault while in kernel mode (if_io_tqg_4) - m_copydata ... at /usr/src/sys/kern/uipc_mbuf.c:659

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264257 --- Comment #55 from Richard Scheffenegger --- Thanks for the core. For stable operation, please use an unpatched kernel, without net.inet.tcp.rfc6675_pipe=0. The patched cores confirm that during the very final phases, the stack
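
For context, the sysctl in question can be checked and, if it had been set to 0, put back at runtime; a minimal sketch, assuming the stock default of 1 on recent FreeBSD:

    # sysctl net.inet.tcp.rfc6675_pipe        # show the current value
    # sysctl net.inet.tcp.rfc6675_pipe=1      # restore the assumed default
    # grep rfc6675_pipe /etc/sysctl.conf      # find (and remove) any boot-time override setting it to 0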

[Bug 264257] [tcp] Panic: Fatal trap 12: page fault while in kernel mode (if_io_tqg_4) - m_copydata ... at /usr/src/sys/kern/uipc_mbuf.c:659

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264257 --- Comment #54 from Michael Tuexen --- (In reply to Dmitriy from comment #53) Thanks. Did you enable options TCPPCAP in the kernel config? It looks like it is not enabled... -- You are receiving this mail because: You are the
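
As a sketch of what that means in practice (MYKERNEL and the -j value are placeholders, not taken from the thread), TCPPCAP is a compile-time option: it is added to a custom kernel configuration and the kernel rebuilt:

    include GENERIC
    ident   MYKERNEL
    options TCPPCAP          # keep copies of recent packets per TCP connection for debugging

    # cd /usr/src
    # make -j8 buildkernel KERNCONF=MYKERNEL
    # make installkernel KERNCONF=MYKERNEL && shutdown -r now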

[Bug 264257] [tcp] Panic: Fatal trap 12: page fault while in kernel mode (if_io_tqg_4) - m_copydata ... at /usr/src/sys/kern/uipc_mbuf.c:659

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264257 --- Comment #53 from Dmitriy --- (In reply to Richard Scheffenegger from comment #51) Thanks. Sent the link in an e-mail to: rsch...@freebsd.org tue...@freebsd.org -- You are receiving this mail because: You are the assignee for the bug. You

[Bug 264257] [tcp] Panic: Fatal trap 12: page fault while in kernel mode (if_io_tqg_4) - m_copydata ... at /usr/src/sys/kern/uipc_mbuf.c:659

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264257 --- Comment #52 from Richard Scheffenegger --- Created attachment 234683 --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=234683&action=edit more logging: extended logging (w/ panic/KASSERT) -- You are receiving this mail because: You

[Bug 264257] [tcp] Panic: Fatal trap 12: page fault while in kernel mode (if_io_tqg_4) - m_copydata ... at /usr/src/sys/kern/uipc_mbuf.c:659

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264257 --- Comment #51 from Richard Scheffenegger --- Thanks a lot! Can you provide the core + kernel.debug files? I've extended the logging in a revised patch, if that is easier. -- You are receiving this mail because: You are on the
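
For reference, on a stock setup (the paths below are the usual defaults, not confirmed from this machine) the pieces being asked for are the saved core and the matching debug symbols, which kgdb from the gdb package can open:

    /var/crash/vmcore.N                         # core saved by savecore(8) after reboot
    /usr/lib/debug/boot/kernel/kernel.debug     # debug symbols for the installed kernel

    # kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.0
    (kgdb) bt                                   # backtrace of the panicking thread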

[Bug 264257] [tcp] Panic: Fatal trap 12: page fault while in kernel mode (if_io_tqg_4) - m_copydata ... at /usr/src/sys/kern/uipc_mbuf.c:659

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264257 --- Comment #50 from Dmitriy --- After applying the patch from comment #34 and with options INVARIANTS and options INVARIANT_SUPPORT in the kernel, the system panics within 5-40 minutes (tried 3 times, same place every time), with the following trace: Unread
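
For a panic like this to leave a usable core behind, crash dumps have to be enabled beforehand; a minimal sketch of the usual /etc/rc.conf settings (stock defaults assumed, not quoted from the thread):

    dumpdev="AUTO"          # dump the kernel to the configured swap device on panic
    dumpdir="/var/crash"    # where savecore(8) writes vmcore.N on the next boot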

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Actually, I believe it's the disabling of HW LRO that makes the difference (I had disabled both it and rx/tx pause previously). With rx/tx pause on and LRO off I get similar results. The throughput is still very sporadic, though. Connecting to host db-01, port 5201 [  5] local 192.168.10.31 port 59055
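
A rough sketch of the knobs being toggled here, assuming the mlx5en(4) interface is named mce0; the driver sysctl names below are from memory and may differ between driver versions, so check sysctl dev.mce.0 first:

    # ifconfig mce0 -lro                           # stack-visible LRO off
    # sysctl dev.mce.0.conf.hw_lro=0               # hardware LRO (assumed sysctl name)
    # sysctl dev.mce.0.rx_pauseframe_control=0     # RX pause frames off (assumed sysctl name)
    # sysctl dev.mce.0.tx_pauseframe_control=0     # TX pause frames off (assumed sysctl name)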

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Disabling rx/tx pause seems to produce higher peaks. [root@db-02 ~]# iperf3 -i 1 -t 30 -c db-01 Connecting to host db-01, port 5201 [  5] local 192.168.10.31 port 10146 connected to 192.168.10.30 port 5201 [ ID] Interval   Transfer Bitrate Retr  Cwnd [  5]   0.00-1.00  

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Mike Jakubik
Yes, it is the default of 1500. If I set it to 9000 I get some bizarre network behavior. On Tue, 14 Jun 2022 09:45:10 -0400 Andrey V. Elsukov wrote: Hi, do you have the same MTU size on the Linux machine? Mike Jakubik
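
For completeness, jumbo frames would be set along these lines (mce0 is an assumed interface name); the MTU has to match on both hosts and on every switch port in between, otherwise odd behaviour like the above is expected:

    # ifconfig mce0 mtu 9000     # runtime change; append "mtu 9000" to the ifconfig_mce0 line in /etc/rc.conf to persist it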

Re: Poor performance with stable/13 and Mellanox ConnectX-6 (mlx5)

2022-06-14 Thread Andrey V. Elsukov
On 13.06.2022 21:25, Mike Jakubik wrote: Hello, I have two new servers with a Mellanox ConnectX-6 card linked at 25Gb/s, however, I am unable to get much more than 6Gb/s when testing with iperf3. The servers are Lenovo SR665 (2 x AMD EPYC 7443 24-Core Processor, 256 GB RAM, Mellanox ConnectX-6
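
One quick cross-check (not something done in the thread itself) that separates a per-stream ceiling from a link-level limit is repeating the test with several parallel TCP streams:

    # iperf3 -c db-01 -i 1 -t 30 -P 8      # -P 8: eight parallel streams to the same server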

[Bug 261129] IPv6 default route vanishes with rtadvd/rtsold

2022-06-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=261129 --- Comment #16 from Marek Zarychta --- After taking some measurements and tests, so far I have come to the following conclusions: 1. The default route gets _silently_ corrupted regardless of the deployed route.algo, with no traces observable neither
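
A minimal way to watch for that silent corruption with base-system tools only (nothing here is taken from the report itself):

    # netstat -rn -f inet6 | grep default    # snapshot the IPv6 default route
    # ndp -rn                                # default router list as seen by the ND code
    # route -n monitor                       # print routing socket messages as changes happen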