Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 16:57, Theo de Raadt wrote: > Claudio Jeker wrote: > >> For ospfd tests you want to make sure that some of the ospf packets need >> fragmenting. So this needs a sizeable network to hit this. > > Yes, as I remember, the problems were only related to fragments of large > packets. > >

Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 16:53, Claudio Jeker wrote: > On Fri, Oct 22, 2021 at 04:45:09PM +0200, Hrvoje Popovski wrote: >> On 22.10.2021. 16:09, Florian Obser wrote: >>> >>> >>> On 22 October 2021 13:55:20 CEST, Stuart Henderson >>> wrote: >>>> On

Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 16:22, Sebastian Benoit wrote: > Stuart Henderson(s...@spacehopper.org) on 2021.10.22 12:55:20 +0100: >> On 2021/10/22 11:25, Jan Klemkow wrote: >>> this diff adds hardware checksum offloading for the receive path of >>> ixl(4) interfaces. >> >> Would be good to have this tested with

Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 16:09, Florian Obser wrote: > > > On 22 October 2021 13:55:20 CEST, Stuart Henderson > wrote: >> On 2021/10/22 11:25, Jan Klemkow wrote: >>> this diff adds hardware checksum offloading for the receive path of >>> ixl(4) interfaces. >> >> Would be good to have this tested with NFS

Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 13:55, Stuart Henderson wrote: > On 2021/10/22 11:25, Jan Klemkow wrote: >> this diff adds hardware checksum offloading for the receive path of >> ixl(4) interfaces. > > Would be good to have this tested with NFS if anyone has a way to do so. > nics are probably better now but I'm

Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 13:39, Jan Klemkow wrote: > Hi Hrvoje, > > That's because you only see these flags if the checksum offloading is > enabled for "sending". I'm still working/debugging the sending side. > Thus, I just sent a diff with the receiving part for now. > > You can see if it's working
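To illustrate what those receive-side flags buy the stack, a hedged sketch under the usual OpenBSD csum_flags semantics (not code from the diff): once the driver marks a packet as verified, the input path can accept it directly and only falls back to in_cksum() when the hardware said nothing.

    /* Hedged sketch, not the real ip_input() code. */
    #include <sys/param.h>
    #include <sys/mbuf.h>

    int in_cksum(struct mbuf *, int);   /* kernel checksum routine */

    int
    ip_cksum_ok_sketch(struct mbuf *m, int hlen)
    {
            if (m->m_pkthdr.csum_flags & M_IPV4_CSUM_IN_OK)
                    return (1);         /* hardware already verified it */
            if (m->m_pkthdr.csum_flags & M_IPV4_CSUM_IN_BAD)
                    return (0);         /* hardware flagged it as broken */
            return (in_cksum(m, hlen) == 0);    /* software fallback */
    }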

Re: ixl(4): add checksum receive offloading

2021-10-22 Thread Hrvoje Popovski
On 22.10.2021. 11:25, Jan Klemkow wrote: > Hi, > > this diff adds hardware checksum offloading for the receive path of > ixl(4) interfaces. > > Tested on: > ixl1 at pci3 dev 0 function 1 "Intel X710 SFP+" rev 0x02: port 1, FW > 6.0.48442 API 1.7, msix, 8 queues, address 40:a6:b7:02:38:3d > >
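For context, a hedged sketch of the general shape of such a receive-offload diff (this is not Jan's actual code; the descriptor bits are placeholders, only the mbuf csum_flags are the real OpenBSD ones): the rx path translates the hardware's "checksum verified" status bits into mbuf flags before the packet is handed to the stack.

    /* Hedged sketch; the real diff reads the ixl(4) descriptor layout. */
    #include <sys/param.h>
    #include <sys/mbuf.h>

    #define RXD_IPV4_CSUM_OK  (1ULL << 0)   /* placeholder bits, not the */
    #define RXD_TCP_CSUM_OK   (1ULL << 1)   /* hardware's real names     */
    #define RXD_UDP_CSUM_OK   (1ULL << 2)

    static void
    rx_csum_sketch(uint64_t status, struct mbuf *m)
    {
            if (status & RXD_IPV4_CSUM_OK)
                    m->m_pkthdr.csum_flags |= M_IPV4_CSUM_IN_OK;
            if (status & RXD_TCP_CSUM_OK)
                    m->m_pkthdr.csum_flags |= M_TCP_CSUM_IN_OK;
            if (status & RXD_UDP_CSUM_OK)
                    m->m_pkthdr.csum_flags |= M_UDP_CSUM_IN_OK;
    }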

Re: iwx(4) 40MHz channel support

2021-10-14 Thread Hrvoje Popovski
On 12.10.2021. 16:29, Hrvoje Popovski wrote: > On 12.10.2021. 14:47, Stefan Sperling wrote: >> This patch adds support for 40MHz channels to iwx(4). >> >> Please sync your source tree before attempting to apply this patch. >> I have committed some changes to this dr

Re: iwx(4) 40MHz channel support

2021-10-12 Thread Hrvoje Popovski
On 12.10.2021. 14:47, Stefan Sperling wrote: > This patch adds support for 40MHz channels to iwx(4). > > Please sync your source tree before attempting to apply this patch. > I have committed some changes to this driver today which this patch > is based on. > > Works for me on AX200/AX201. Does

Re: WIP iwx(4) Tx aggregation

2021-07-29 Thread Hrvoje Popovski
On 29.7.2021. 18:05, Stefan Sperling wrote: > This is an updated patch which has been rebased on top of -current. > Make sure that your tree is fully synced up to r1.86 of if_iwx.c before > applying this patch. > > This patch includes the small change to ieee80211_input.c I sent here: >

Re: forwarding in parallel ipsec workaround

2021-07-23 Thread Hrvoje Popovski
On 23.7.2021. 16:20, Vitaliy Makkoveev wrote: > On Thu, Jul 22, 2021 at 11:30:02PM +0200, Hrvoje Popovski wrote: >> On 22.7.2021. 22:52, Vitaliy Makkoveev wrote: >>> On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote: >>>> On 22.7.2021. 12:21, Hrvoje P

Re: forwarding in parallel ipsec workaround

2021-07-22 Thread Hrvoje Popovski
On 22.7.2021. 22:52, Vitaliy Makkoveev wrote: > On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote: >> On 22.7.2021. 12:21, Hrvoje Popovski wrote: >>> Thank you for the explanation.. >>> >>> after hitting the box all night, the box panicked and i was able to r

Re: forwarding in parallel ipsec workaround

2021-07-22 Thread Hrvoje Popovski
On 22.7.2021. 12:21, Hrvoje Popovski wrote: > Thank you for the explanation.. > > after hitting the box all night, the box panicked and i was able to reproduce it > this morning ... I'm not sure, but the box panics after an hour or more of > sending traffic through the iked tunnel .. > I will try to re

Re: forwarding in parallel ipsec workaround

2021-07-22 Thread Hrvoje Popovski
On 22.7.2021. 0:39, Alexander Bluhm wrote: > On Thu, Jul 22, 2021 at 12:06:09AM +0200, Hrvoje Popovski wrote: >> I'm combining this with the last parallel diff and i can't see any drops in >> traffic. Even when sending at a high rate, traffic through iked or isakmpd is >> stable at 1

Re: forwarding in parallel ipsec workaround

2021-07-21 Thread Hrvoje Popovski
On 21.7.2021. 22:21, Alexander Bluhm wrote: > Ahh, too many diffs in my tree. I have forgotten the chunk > crp->crp_flags = ... | CRYPTO_F_NOQUEUE > > Try this. Still testing it myself, it looks a bit faster. I'm combining this with the last parallel diff and i can't see any drops in traffic. Even

Re: forwarding in parallel ipsec workaround

2021-07-21 Thread Hrvoje Popovski
On 21.7.2021. 18:41, Alexander Bluhm wrote: > On Mon, Jul 19, 2021 at 07:33:55PM +0300, Vitaliy Makkoveev wrote: >> Hi, pipex(4) is also not ready for parallel access. In the chunk below >> it will be accessed through (*ifp->if_input)() -> ether_input() -> >> pipex_pppoe_input(). This looks not

Re: forwarding in parallel ipsec workaround

2021-07-19 Thread Hrvoje Popovski
On 19.7.2021. 17:53, Alexander Bluhm wrote: > Hi, > > I found why the IPsec workaround did not work. > > At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index + > idx), but the workaround modifies net_tq() at runtime. Modifying > net_tq() at runtime is bad anyway as task_add() and
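A hedged reading of the mismatch described above, with the names taken from the mail and the surrounding code purely illustrative: the input queue captures its softnet taskq once, so changing what net_tq() returns later cannot move work that is already bound.

    /* init time: the taskq is chosen once and remembered */
    ifiq->ifiq_softnet = net_tq(ifp->if_index + idx);

    /* runtime: packets are always queued to the captured taskq, so a
     * workaround that changes net_tq()'s mapping afterwards has no
     * effect here and can disagree with fresh net_tq() lookups done
     * elsewhere (e.g. right before a task_add()). */
    task_add(ifiq->ifiq_softnet, &ifiq->ifiq_task);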

Re: WIP iwx(4) Tx aggregation

2021-07-13 Thread Hrvoje Popovski
On 30.6.2021. 13:28, Stefan Sperling wrote: > On Mon, Jun 21, 2021 at 08:37:11PM +0200, Stefan Sperling wrote: >> This patch attempts to implement Tx aggregation support for iwx(4). >> >> It is not yet ready to be committed because of outstanding problems: >> >> - Under load the firmware throws a

Re: forwarding in parallel with ipsec panic

2021-07-08 Thread Hrvoje Popovski
On 8.7.2021. 0:10, Vitaliy Makkoveev wrote: > On Wed, Jul 07, 2021 at 11:07:08PM +0200, Hrvoje Popovski wrote: >> On 7.7.2021. 22:36, Vitaliy Makkoveev wrote: >>> Thanks. ipsp_spd_lookup() no longer panics in pool_get(9). >>> >>> I guess the panics continu

Re: forwarding in parallel with ipsec panic

2021-07-07 Thread Hrvoje Popovski
On 7.7.2021. 22:36, Vitaliy Makkoveev wrote: > Thanks. ipsp_spd_lookup() no longer panics in pool_get(9). > > I guess the panics continue because simultaneous modifications of > 'tdbp->tdb_policy_head' break it. Could you try the diff below? It > introduces a `tdb_polhd_mtx' mutex(9) and uses it to
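A minimal sketch of the approach the mail describes, assuming tdb_policy_head is the TAILQ from ip_ipsp.h (the link field name is recalled from memory, so treat it as illustrative rather than mvs' actual diff):

    /* Hedged sketch, not the actual diff. */
    #include <sys/param.h>
    #include <sys/mutex.h>
    #include <netinet/ip_ipsp.h>

    struct mutex tdb_polhd_mtx = MUTEX_INITIALIZER(IPL_SOFTNET);

    void
    tdb_policy_insert_sketch(struct tdb *tdbp, struct ipsec_policy *ipo)
    {
            mtx_enter(&tdb_polhd_mtx);
            /* link name assumed; every list update takes the same mutex */
            TAILQ_INSERT_TAIL(&tdbp->tdb_policy_head, ipo, ipo_tdb_next);
            mtx_leave(&tdb_polhd_mtx);
    }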

Re: forwarding in parallel with ipsec panic

2021-07-07 Thread Hrvoje Popovski
On 7.7.2021. 19:38, Vitaliy Makkoveev wrote: > Hi, > > It seems the first panic occurred because ipsp_spd_lookup() > modifies tdbp->tdb_policy_head and simultaneous execution breaks it. > I guess at least a mutex(9) should be used to protect `tdb_policy_head'. > > The second panic occurred

Re: forwarding in parallel with ipsec panic

2021-07-07 Thread Hrvoje Popovski
On 7.7.2021. 12:46, Hrvoje Popovski wrote: > The panic can be triggered when i have the parallel diff applied and am sending traffic > over the ipsec tunnel; on the other side, while traffic is flowing, i'm > restarting the isakmpd daemon and, while ipsec is negotiating, doing ifconfig ix1 > down && ifconfig

forwarding in parallel with ipsec panic

2021-07-07 Thread Hrvoje Popovski
Hi, i don't want to pollute bluhm@'s parallel forwarding mail on tech@, so i'm sending this report as a separate thread. This panic depends on bluhm@'s parallel diff ... i found it yesterday. I have an ipsec tunnel between two hosts without pf and i'm sending traffic over that tunnel ..

Re: ifnewlladdr spl

2021-06-29 Thread Hrvoje Popovski
On 29.6.2021. 23:05, Alexander Bluhm wrote: > On Tue, Jun 29, 2021 at 10:39:14PM +0200, Hrvoje Popovski wrote: >> with this diff, without any traffic through aggr, if i destroy the aggr >> interface i'm getting the log below ... the log can't be reproduced after the first >> destroy.. y

Re: ifnewlladdr spl

2021-06-29 Thread Hrvoje Popovski
On 29.6.2021. 19:19, Alexander Bluhm wrote: > So what to do with this diff? > > - OK to commit? > - Test it in snaps? > - Call for testers? > > It would be interesting to see whether the kernel is stable when trunk or > aggr interfaces are created or destroyed while the machine is under > network load.

Re: [External] : Re: if_etherbridge.c vs. parallel forwarding

2021-06-25 Thread Hrvoje Popovski
On 25.6.2021. 10:02, Alexandr Nedvedicky wrote: > Hello David, > > >> >> during the drive to work it occurred to me that we should basically have >> the same logic around whether we should insert or replace or do nothing >> in both the smr and mutex critical sections. >> >> it at least makes the

Re: limit MSR_INT_PEN_MSG use to < family 16h

2021-06-10 Thread Hrvoje Popovski
On 10.6.2021. 8:17, Jonathan Gray wrote: > On Wed, Jun 09, 2021 at 10:35:48PM -0700, Mike Larkin wrote: >> On Thu, Jun 10, 2021 at 03:19:43PM +1000, Jonathan Gray wrote: >>> Ilya Voronin sent a diff to misc to limit MSR_INT_PEN_MSG use to >>> < AMD family 17h prompted by a problem with an AWS t3a

Re: [External] : Re: parallel forwarding vs. bridges

2021-06-07 Thread Hrvoje Popovski
On 7.6.2021. 9:25, Alexandr Nedvedicky wrote: > Hello, > > On Sun, Jun 06, 2021 at 09:54:50PM +0200, Hrvoje Popovski wrote: > >> >> this one? > yes, exactly this one. Ok, great .. this and similar panics are products of accidental stp loops. I have two same

Re: [External] : Re: parallel forwarding vs. bridges

2021-06-06 Thread Hrvoje Popovski
On 5.6.2021. 18:43, Alexandr Nedvedicky wrote: > According to early tests it works well. Currently there is one > mysterious panic, which Hrvoje may be able to comment on > how to trigger. The stack looks as follows: > > kernel diagnostic assertion "smr->smr_func == NULL"
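For reference, a hedged, generic illustration of what that assertion guards (plain smr(9) usage, not the etherbridge code): smr_call(9) requires that the entry is not already pending, so scheduling the same smr_entry twice before its callback has run trips the "smr->smr_func == NULL" check.

    /* Hedged example of the smr_call(9) contract. */
    #include <sys/param.h>
    #include <sys/smr.h>

    struct entry {
            struct smr_entry e_smr;     /* smr_init()ed at creation time */
            /* ... payload ... */
    };

    void
    entry_free_cb(void *arg)
    {
            /* runs after the grace period; free the entry here */
    }

    void
    entry_unlink_sketch(struct entry *e)
    {
            /* must be reached only once per entry: a second smr_call()
             * on the same still-pending e_smr hits the assertion above */
            smr_call(&e->e_smr, entry_free_cb, e);
    }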

Re: move copyout() in DIOCGETSTATES outside of NET_LOCK() and state_lock

2021-05-20 Thread Hrvoje Popovski
On 20.5.2021. 3:23, Alexandr Nedvedicky wrote: > Hello, > > Hrvoje gave the experimental diff, which trades rw-locks in pf(4) > for mutexes [1], a try. Hrvoje soon discovered the machine panics when doing 'pfctl -ss'. > The callstack looks as follows: > > panic: acquiring blockable sleep lock with

Re: parallel forwarding vs. bridges

2021-05-17 Thread Hrvoje Popovski
On 17.5.2021. 16:24, Alexandr Nedvedicky wrote: > Hrvoje, > > managed to trigger diagnostic panics with diff [1] sent by bluhm@ > some time ago. The panic Hrvoje sees comes from ether_input() here: > [1] https://marc.info/?l=openbsd-tech=161903387904923=2 > > [2]

Re: reposync error

2021-05-17 Thread Hrvoje Popovski
On 17.5.2021. 13:55, Theo Buehler wrote: > On Mon, May 17, 2021 at 11:24:25AM +0200, Hrvoje Popovski wrote: >> Hi all, >> >> today after sysupgrade i'm getting this error with reposync >> >> r620-1# su -m cvs -c "reposync -s src rsync://ftp.hostserver.de/cvs

reposync error

2021-05-17 Thread Hrvoje Popovski
Hi all, today after sysupgrade i'm getting this error with reposync r620-1# su -m cvs -c "reposync -s src rsync://ftp.hostserver.de/cvs /home/cvs" reposync: rsync error: rsync: did not see server greeting rsync error: error starting client-server protocol (code 5) at main.c(1814) [Receiver=3.2.3]

Re: running network stack forwarding in parallel

2021-05-13 Thread Hrvoje Popovski
On 13.5.2021. 1:25, Vitaliy Makkoveev wrote: > It seems this lock order issue is not specific to the parallel diff. Yes, you are right ... it seemed familiar, but i couldn't reproduce it on an lacp trunk or without this diff, so i thought the parallel diff was the one to blame .. sorry for the noise ..

Re: running network stack forwarding in parallel

2021-05-12 Thread Hrvoje Popovski
On 21.4.2021. 21:36, Alexander Bluhm wrote: > We need more MP pressure to find such bugs and races. I think now > is a good time to give this diff broader testing and commit it. > You need interfaces with multiple queues to see a difference. Hi, while forwarding ip4 traffic over a box with

Re: iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
On 4.5.2021. 12:15, Raf Czlonka wrote: > On Tue, May 04, 2021 at 10:55:37AM BST, Stefan Sperling wrote: >> On Tue, May 04, 2021 at 11:47:43AM +0200, Hrvoje Popovski wrote: >>> I'm not sure that sysupgrade can finish with iwx and eduroam. Maybe i >>> ne

Re: iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
On 4.5.2021. 11:55, Stefan Sperling wrote: > On Tue, May 04, 2021 at 11:47:43AM +0200, Hrvoje Popovski wrote: >> I'm not sure that sysupgrade can finish with iwx and eduroam. Maybe i >> need to wait longer, i will try that ... sysupgrade will finish if iwx is >> disabled or hos

Re: iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
On 4.5.2021. 11:44, Stefan Sperling wrote: > On Tue, May 04, 2021 at 11:36:01AM +0200, Hrvoje Popovski wrote: >> On 4.5.2021. 11:02, Hrvoje Popovski wrote: >>> i've disabled and stopped wpa_supplicant and rebooted the laptop, and iwx0 >>> didn't get an ip but the laptop did boot norm

Re: iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
On 4.5.2021. 11:38, Stefan Sperling wrote: > On Tue, May 04, 2021 at 11:02:49AM +0200, Hrvoje Popovski wrote: >> On 4.5.2021. 10:47, Stefan Sperling wrote: >>> On Tue, May 04, 2021 at 10:32:02AM +0200, Hrvoje Popovski wrote: >>>> Hi all, >>>> >>&

Re: iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
On 4.5.2021. 11:02, Hrvoje Popovski wrote: > On 4.5.2021. 10:47, Stefan Sperling wrote: >> On Tue, May 04, 2021 at 10:32:02AM +0200, Hrvoje Popovski wrote: >>> Hi all, >>> >>> today i tried to do sysupgrade and it wouldn't finish because of iwx

Re: iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
On 4.5.2021. 10:47, Stefan Sperling wrote: > On Tue, May 04, 2021 at 10:32:02AM +0200, Hrvoje Popovski wrote: >> Hi all, >> >> today i tried to do sysupgrade and it wouldn't finish because of iwx errors. >> iwx is working just fine with snapshots, even with eduroam

iwx and sysupgrade

2021-05-04 Thread Hrvoje Popovski
Hi all, today i tried to do sysupgrade and it wouldn't finish because of iwx errors. iwx is working just fine with snapshots, even with eduroam :) e14gen2# cat /etc/hostname.iwx0 debug join "eduroam" wpa wpaakms 802.1x autoconf e14gen2# ifconfig iwx0 iwx0: flags=808847 mtu 1500

Re: [External] : Re: running network stack forwarding in parallel

2021-04-30 Thread Hrvoje Popovski
On 22.4.2021. 13:08, Hrvoje Popovski wrote: > On 22.4.2021. 12:38, Alexander Bluhm wrote: >> It is not clear why the lock helps. Is it a bug in routing or ARP? >> Or is it just different timing introduced by the additional kernel >> lock? The parallel network task

Re: pf_state_key_link_reverse() is prone to race on parallel forwarding

2021-04-22 Thread Hrvoje Popovski
On 21.4.2021. 22:19, Alexandr Nedvedicky wrote: > Hello, > > people who will be running pf(4) with bluhm's diff [1] may trip > one of the asserts triggered by pf_state_key_link_reverse() here: > > 7366 void > 7367 pf_state_key_link_reverse(struct pf_state_key *sk, struct pf_state_key > *skrev)

Re: [External] : Re: running network stack forwarding in parallel

2021-04-22 Thread Hrvoje Popovski
On 22.4.2021. 13:42, Mark Kettenis wrote: >> Date: Thu, 22 Apr 2021 13:09:34 +0200 >> From: Alexander Bluhm >> >> On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote: >>> r620-1# papnpaiancini:cc :p :op >>> opooolo_llc_ac_caccahhceh_ei_eti_ti

Re: [External] : Re: running network stack forwarding in parallel

2021-04-22 Thread Hrvoje Popovski
On 22.4.2021. 12:38, Alexander Bluhm wrote: > It is not clear why the lock helps. Is it a bug in routing or ARP? > Or is it just different timing introduced by the additional kernel > lock? The parallel network tasks stress the subsystems of the kernel > more than before with MP load. Having

Re: [External] : Re: running network stack forwarding in parallel

2021-04-22 Thread Hrvoje Popovski
On 22.4.2021. 11:36, Hrvoje Popovski wrote: > if you want i'll try to reproduce it on other boxes.. > maybe i can trigger it here easily because of the 2 sockets? on the second box with 6 x Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz, 3600.02 MHz.. r620-1# papnpaiancini:cc

Re: [External] : Re: running network stack forwarding in parallel

2021-04-22 Thread Hrvoje Popovski
On 22.4.2021. 11:02, Alexander Bluhm wrote: > On Thu, Apr 22, 2021 at 09:03:22AM +0200, Hrvoje Popovski wrote: >> something like this: >> >> x3550m4# pappnaiannc:iicc :p:o ppoolo_oolcla__ddcohoe__gg_eiettt::e m >> _mmcbmualg2fkpilc2_:: chppeaag >> gceke: ee mm

Re: [External] : Re: running network stack forwarding in parallel

2021-04-22 Thread Hrvoje Popovski
On 22.4.2021. 1:10, Hrvoje Popovski wrote: > On 22.4.2021. 0:31, Alexandr Nedvedicky wrote: >> Hello, >> >> >>>> Hi, >>>> >>>> with this diff i'm getting panic when i'm pushing traffic over that box. >>>> This is

Re: [External] : Re: running network stack forwarding in parallel

2021-04-21 Thread Hrvoje Popovski
On 22.4.2021. 0:31, Alexandr Nedvedicky wrote: > Hello, > > >>> Hi, >>> >>> with this diff i'm getting panic when i'm pushing traffic over that box. >>> This is plain forwarding. To compile with witness ? >> >> >> with witness >> > > any chance to check other CPUs to see what code they are

Re: running network stack forwarding in parallel

2021-04-21 Thread Hrvoje Popovski
On 22.4.2021. 0:26, Alexander Bluhm wrote: > On Wed, Apr 21, 2021 at 11:28:17PM +0200, Hrvoje Popovski wrote: >> with this diff i'm getting panic when i'm pushing traffic over that box. > > Thanks for testing. > >> I'm sending traffic from host connected on ix0 from addr

Re: running network stack forwarding in parallel

2021-04-21 Thread Hrvoje Popovski
On 21.4.2021. 23:28, Hrvoje Popovski wrote: > On 21.4.2021. 21:36, Alexander Bluhm wrote: >> Hi, >> >> For a while we have been running the network stack without the kernel lock, but with a >> network lock. The latter is an exclusive sleeping rwlock. >> >> It is possible

Re: running network stack forwarding in parallel

2021-04-21 Thread Hrvoje Popovski
On 21.4.2021. 21:36, Alexander Bluhm wrote: > Hi, > > For a while we have been running the network stack without the kernel lock, but with a > network lock. The latter is an exclusive sleeping rwlock. > > It is possible to run the forwarding path in parallel on multiple > cores. I use ix(4) interfaces which

Re: use 64bit ethernet addresses in carp(4)

2021-03-05 Thread Hrvoje Popovski
On 5.3.2021. 6:11, David Gwynne wrote: > this passes the destination ethernet address from the network packet > as a uint64_t from ether_input into carp_input, so it can use it > to see if a carp interface should take the packet. > > it's been working on amd64 and sparc64. anyone else want to
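A hedged illustration of the representation (not dlg's actual helper): the 48-bit ethernet address is packed into a uint64_t so carp_input() can test the destination with a single integer compare instead of a per-byte loop.

    #include <stdint.h>

    /* Hedged sketch: pack a 6-byte ethernet address into the low 48 bits. */
    uint64_t
    eaddr_to_u64_sketch(const uint8_t *ea)
    {
            uint64_t v = 0;
            int i;

            for (i = 0; i < 6; i++)
                    v = (v << 8) | ea[i];
            return (v);
    }

In this packing the group/multicast bit of the address ends up as bit 40 (0x010000000000), so it can also be tested without unpacking.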

pkg_add quirks log in snapshot

2021-02-24 Thread Hrvoje Popovski
Hi all, i'm getting this log after updating to the latest snapshot: pkg_add -ui quirks-3.580 signed on 2021-02-24T18:23:18Z |No change in quirks-3.580 String found where operator expected at /usr/local/libdata/perl5/site_perl/OpenBSD/Quirks.pm line 2196, near ""Upstrem moved to unversioned tarballs,

Re: switch(4): fix netlock assertion within ifpromisc()

2021-02-19 Thread Hrvoje Popovski
On 19.2.2021. 21:50, Vitaliy Makkoveev wrote: > As it was reported [1], switch(4) triggers NET_ASSERT_LOCKED() while > we perform ifconfig(8) destroy. ifpromisc() requires the netlock to be held. > This is true while switch_port_detach() and the underlay ifpromisc() are called > through switch_ioctl(). But
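A hedged sketch of the kind of fix being described, purely illustrative and not mvs' actual diff: take the net lock around the ifpromisc() call on the detach path that runs outside switch_ioctl(), so the NET_ASSERT_LOCKED() inside ifpromisc() is satisfied.

    /* detach path reached without the net lock held */
    NET_LOCK();
    error = ifpromisc(ifs, 0);  /* ifpromisc() asserts the net lock */
    NET_UNLOCK();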

Re: i386 pmap diff

2020-12-23 Thread Hrvoje Popovski
On 23.12.2020. 18:24, Mark Kettenis wrote: > Diff below switches the i386 pmap to use the modern km_alloc(9) > functions and uses IPL_VM for the pmap pool, following the example of > amd64. > > Don't have easy access to an i386 machine right now, so this has only > been compile tested. > > ok

Re: Kernel panic with i386 on latest snapshot

2020-12-15 Thread Hrvoje Popovski
On 15.12.2020. 18:57, Mark Kettenis wrote: >> From: jungle Boogie >> Date: Tue, 15 Dec 2020 08:07:04 -0800 >> >> Hi All, >> >> On my i386 Toshiba netbook machine, I am getting a kernel panic with >> the latest i386 snapshot. >> >> I hope this information helps someone with the issue. >> >>> show

Re: Kernel panic with i386 on latest snapshot

2020-12-15 Thread Hrvoje Popovski
On 15.12.2020. 17:07, jungle Boogie wrote: > Hi All, > > On my i386 Toshiba netbook machine, I am getting a kernel panic with > the latest i386 snapshot. > > I hope this information helps someone with the issue. > >> show panic > kernel diagnostic assertion "_kernel_lock_held()" failed: >

Re: timekeep: fixing large skews on amd64 with RDTSCP

2020-08-23 Thread Hrvoje Popovski
On 23.8.2020. 16:50, Claudio Jeker wrote: > On Sun, Aug 23, 2020 at 04:06:01PM +0200, Christian Weisgerber wrote: >> Scott Cheloha: >> >>> This "it might slow down the network stack" thing keeps coming up, and >>> yet nobody can point to (a) who expressed this concern or (b) what the >>> penalty

Re: kstats for em(4)

2020-07-07 Thread Hrvoje Popovski
On 7.7.2020. 10:51, David Gwynne wrote: > unfortunately em(4) covers a lot of chips of different vintages, so if > anyone has a super old one they can try this diff on with kstat enabled > in their kernel config, that would be appreciated. Hi, don't know if 82576 is old or super old but here

Re: fix races in if_clone_create()

2020-06-29 Thread Hrvoje Popovski
On 29.6.2020. 10:59, Vitaliy Makkoveev wrote: > I reworked the reproduction tool. Now I avoid the fork()/exec() route and it > takes a couple of minutes to hit the panic on 4 cores. Also some screenshots are > attached. > > I hope anyone else will try it. Hi, i'm getting the panic quite fast :) i will leave the box

Re: vlan and bridge panic with latest snapshot

2020-06-22 Thread Hrvoje Popovski
On 22.6.2020. 11:11, Claudio Jeker wrote: > On Sun, Jun 21, 2020 at 08:51:53PM +0200, Hrvoje Popovski wrote: >> Hi all, >> >> with today's snapshot from 21-Jun-2020 09:34 >> OpenBSD 6.7-current (GENERIC.MP) #286: Sun Jun 21 08:51:29 MDT 2020 >> dera...@amd64.op

vlan and bridge panic with latest snapshot

2020-06-21 Thread Hrvoje Popovski
Hi all, with today's snapshot from 21-Jun-2020 09:34 OpenBSD 6.7-current (GENERIC.MP) #286: Sun Jun 21 08:51:29 MDT 2020 dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP if i do "ifconfig vlan" i'm getting an assert: x3550m4# ifconfig vlan vlan100: flags=8splassert:

Re: multiple rings and cpus for ix(4)

2020-06-17 Thread Hrvoje Popovski
On 17.6.2020. 13:13, Jonathan Matthew wrote: > On Wed, Jun 17, 2020 at 12:50:46PM +0200, Hrvoje Popovski wrote: >> On 17.6.2020. 12:45, Hrvoje Popovski wrote: >>> On 17.6.2020. 11:27, Hrvoje Popovski wrote: >>>> On 17.6.2020. 10:36, David Gwynne wrote: >>>

Re: multiple rings and cpus for ix(4)

2020-06-17 Thread Hrvoje Popovski
On 17.6.2020. 12:45, Hrvoje Popovski wrote: > On 17.6.2020. 11:27, Hrvoje Popovski wrote: >> On 17.6.2020. 10:36, David Gwynne wrote: >>> this is an updated version of a diff from christiano haesbaert by way of >>> mpi@ to enable the use of multiple tx and rx rings w

Re: multiple rings and cpus for ix(4)

2020-06-17 Thread Hrvoje Popovski
On 17.6.2020. 11:27, Hrvoje Popovski wrote: > On 17.6.2020. 10:36, David Gwynne wrote: >> this is an updated version of a diff from christiano haesbaert by way of >> mpi@ to enable the use of multiple tx and rx rings with msi-x. >> >> the high level description is tha

Re: multiple rings and cpus for ix(4)

2020-06-17 Thread Hrvoje Popovski
On 17.6.2020. 10:36, David Gwynne wrote: > this is an updated version of a diff from christiano haesbaert by way of > mpi@ to enable the use of multiple tx and rx rings with msi-x. > > the high level description is that the driver checks to see if msix is > available, and if so how many vectors
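A hedged sketch of the per-queue msi-x setup pattern being described; the pci_intr and intrmap(9) function names are the usual amd64 kernel API, while the queue structure fields are made up for the example.

    /* Hedged sketch: one msi-x vector per rx/tx queue, bound to a cpu. */
    for (i = 0; i < nqueues; i++) {
            pci_intr_handle_t ih;

            if (pci_intr_map_msix(pa, i, &ih) != 0)
                    break;
            queues[i].tag = pci_intr_establish_cpu(pa->pa_pc, ih,
                IPL_NET | IPL_MPSAFE, intrmap_cpu(sc_intrmap, i),
                ix_queue_intr, &queues[i], queues[i].name);
    }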

Re: em(4) hw setup vs queues

2020-03-03 Thread Hrvoje Popovski
On 3.3.2020. 11:37, Martin Pieuchot wrote: > Currently em_hw_init() uses some hardcoded values to configure TX > rings. The diff below converts it to use the value of the first queue. > This is currently a no-op. It makes the code consistent with the > rest of the driver and reduces the size of

Re: em(4) towards multiqueues

2020-02-16 Thread Hrvoje Popovski
On 14.2.2020. 18:28, Martin Pieuchot wrote: > I'm running this on: > > em0 at pci1 dev 0 function 0 "Intel I210" rev 0x03: msi > em0 at pci0 dev 20 function 0 "Intel I354 SGMII" rev 0x03: msi > > More tests are always welcome ;) em0 at pci0 dev 25 function 0 "Intel 82579LM" rev

Re: bnxt(4), myx(4), vr(4): refill timeouts: timeout_add(..., 0) -> timeout_add(..., 1)

2020-01-20 Thread Hrvoje Popovski
On 20.1.2020. 17:40, Scott Cheloha wrote: > Appreciate the testing. np, i like testing network stuff :) > Given what dlg@ has said in the past I think there should only be a > performance change in a livelock situation. yeah, that could be a problem with this testing ... kern.netlivelocks=6

Re: em(4) diff to test