On Tue, 28 Jul 2020 22:29:02 +0530 Rakesh Pillai wrote:
> > -----Original Message-----
> > From: David Laight <[email protected]>
> > Sent: Sunday, July 26, 2020 4:46 PM
> > To: 'Sebastian Gottschall' <[email protected]>; Hillf Danton <[email protected]>
> > Cc: Andrew Lunn <[email protected]>; Rakesh Pillai <[email protected]>;
> > [email protected]; [email protected];
> > [email protected]; [email protected];
> > [email protected]; Markus Elfring <[email protected]>;
> > [email protected]; [email protected]; [email protected];
> > [email protected]; [email protected]
> > Subject: RE: [RFC 0/7] Add support to process rx packets in thread
> >
> > From: Sebastian Gottschall <[email protected]>
> > > Sent: 25 July 2020 16:42
> > > >> I agree. I can only say that I tested this patch recently due to this
> > > >> discussion here, and it can be changed via sysfs. But it doesn't work
> > > >> for wifi drivers, which mainly use dummy netdev devices. For those I
> > > >> made a small patch to get them working, using napi_set_threaded
> > > >> hardcoded manually in the drivers. (See patch below.)
> >
> > > > With CONFIG_THREADED_NAPI, there is no need to consider what you did
> > > > here in the napi core, because device drivers know better and are
> > > > responsible for it before calling napi_schedule(n).
> >
> > > Yeah, but that approach will not work in some cases. Some stupid
> > > drivers use a locking context in the napi poll function,
> > > and in that case the performance goes to hell. I discovered this with
> > > the mvneta eth driver (Marvell) and mt76 tx polling (rx works).
> > > For mvneta it causes very high latencies and packet drops; for mt76
> > > it stalls packets entirely. It simply doesn't work (though in all cases
> > > with no crashes). So threading will only work for drivers that are
> > > compatible with that approach. It cannot be used as a drop-in
> > > replacement, from my point of view.
> > > It's all a question of the driver design.
> >
> > Why should it make (much) difference whether the napi callbacks (etc.)
> > are done in the context of the interrupted process or that of a
> > specific kernel thread?
> > The process flags (or whatever) can even be set so that it appears
> > to be the expected 'softint' context.
> >
> > In any case, running NAPI from a thread will just show up the next
> > piece of code that runs for ages in softint context.
> > I think I've seen the tail end of memory being freed under RCU
> > finally happening under softint and taking absolutely ages.
> >
> > David
>
> Hi All,
> Is the threaded NAPI change posted to the kernel?
https://lore.kernel.org/netdev/[email protected]/
https://lore.kernel.org/netdev/[email protected]/

> Is the conclusion of this discussion that "we cannot use threads for
> processing packets"?

No such conclusion was reached, if any was. TBH it is hard to answer your
question; OTOH I'm wondering in which context device driver developers
prefer to handle tx/rx (IRQ, BH, or user context on available idle CPUs),
and what is preventing them from doing that. Does it make even
ant-antenna-size sense to set napi::weight to 3 and turn to 30 kworkers to
process a ten-minute packet flood hitting the hardware, for instance on a
system with 32 CPU cores or more?

_______________________________________________
ath10k mailing list
[email protected]
http://lists.infradead.org/mailman/listinfo/ath10k
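The `napi_set_threaded()` call Sebastian hardcoded came from the RFC patches
linked above, which did not land as-is; the interface eventually merged
(v5.12, after this thread) is `dev_set_threaded()` plus a per-device sysfs
knob. A minimal driver-side sketch, assuming a hypothetical single-NAPI
driver (all `mydrv` names are illustrative, not from the thread):

```c
/* Sketch only: opting a driver's NAPI context into threaded polling
 * via dev_set_threaded() (merged in v5.12). "mydrv" is hypothetical.
 */
#include <linux/netdevice.h>

struct mydrv_priv {
	struct napi_struct napi;
};

static int mydrv_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	/* ... process up to @budget rx packets, counting into work_done ... */

	if (work_done < budget)
		napi_complete_done(napi, work_done);
	return work_done;
}

static int mydrv_setup(struct net_device *ndev)
{
	struct mydrv_priv *priv = netdev_priv(ndev);

	/* 4-argument signature as of the v5.12 era */
	netif_napi_add(ndev, &priv->napi, mydrv_poll, NAPI_POLL_WEIGHT);

	/* Run mydrv_poll() in a per-NAPI kthread instead of softirq context.
	 * Also toggleable at runtime:
	 *   echo 1 > /sys/class/net/<iface>/threaded
	 */
	return dev_set_threaded(ndev, true);
}
```

Note that `dev_set_threaded()` flips all NAPI instances of the netdev at
once, which sidesteps the dummy-netdev problem Sebastian mentions only if
the wifi driver registers its NAPI on a real (or dedicated dummy) netdev.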
