On Mon, 8 Jun 2009, Lal wrote:
> >> I am using the 2.6.21 kernel and CONFIG_E1000_NAPI is defined. It's a
> >> multi-core system.
> >>
> >> In e1000_intr the interface on which the packet was received is queued
> >> on the poll_list (which is per-CPU).
> >
> > this is the key.
> >
> >> Later net_rx_action invokes dev->poll, which invokes the e1000_clean
> >> function, and e1000_clean in turn invokes e1000_clean_rx_irq. Although
> >> this call can be made on any CPU, rx_ring is a common data structure
> >> and is not protected.
> >>
> >> Is rx_ring per-CPU or common to all CPUs?
> >
> > The OS guarantees that we will never have two poll events running
> > simultaneously.
> >
> Thanks Jesse, this answers my question.
> Having said this, can I conclude that on a multi-core or SMP system,
> for a given interface, only one core/CPU will be processing packets at
> a time, with the remaining cores waiting for the netpoll lock (for the
> given interface)?
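(For anyone following along: the serialization Jesse refers to comes from a
single per-device "scheduled" bit - in 2.6.21, if memory serves, the
__LINK_STATE_RX_SCHED bit tested by netif_rx_schedule_prep() and cleared by
netif_rx_complete(). Below is a minimal userspace sketch of that idea, not
the kernel code itself; dev_sched, fake_rx_schedule_prep and fake_poll are
made-up names used purely for illustration.)

/*
 * Userspace sketch of the NAPI "scheduled" bit.  Only the CPU that
 * atomically sets the bit queues the device and runs the poll routine;
 * everyone else sees the bit already set and backs off.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag dev_sched = ATOMIC_FLAG_INIT; /* "device is on a poll_list" */

/* Interrupt path: try to claim the device for polling. */
static int fake_rx_schedule_prep(void)
{
    /* Returns 1 only for the first caller; later callers lose the race. */
    return !atomic_flag_test_and_set(&dev_sched);
}

/* Poll path: clean the ring, then release the bit so a later interrupt
 * can schedule the device again (like netif_rx_complete()). */
static void fake_poll(void)
{
    printf("only the winning CPU cleans the rx_ring\n");
    atomic_flag_clear(&dev_sched);
}

int main(void)
{
    if (fake_rx_schedule_prep())
        fake_poll();            /* first "interrupt" wins            */
    if (fake_rx_schedule_prep())
        fake_poll();            /* schedulable again after the clear */
    return 0;
}

So rx_ring does not need its own lock: whichever CPU owns the bit does all
the ring cleanup for that device until it completes the poll.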
Yes, for interfaces that do not use RSS (with MSI-X) to spread flows out to
multiple RX queues.

> If yes, this is underutilization of cores. How can we overcome this?
> I am facing a situation where one core's usage goes to 100% while the rest
> remain idle.

Some patches from Tom Herbert at Google recently went to netdev for review
and basically do exactly what you request: fan out flows to multiple CPUs
for adapters that do not have RSS+MSI-X.
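While those patches are still under review, here is a rough, self-contained
sketch of the idea behind that kind of software fan-out (this is not the
code on netdev; struct flow, flow_hash, pick_cpu and NR_CPUS are made up
for illustration): hash each packet's flow identifiers and use the hash to
choose a target CPU, so packets of one flow stay on one core while
different flows spread across the others.

#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4   /* number of CPUs allowed to do RX work (assumed) */

struct flow {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
};

/* Cheap 4-tuple hash for the sketch; a real implementation would use a
 * proper hash such as jhash. */
static uint32_t flow_hash(const struct flow *f)
{
    uint32_t h = f->saddr ^ f->daddr ^ ((uint32_t)f->sport << 16 | f->dport);
    h ^= h >> 16;
    h *= 0x45d9f3bu;
    h ^= h >> 16;
    return h;
}

/* Map the hash onto the set of CPUs doing RX processing. */
static int pick_cpu(const struct flow *f)
{
    return flow_hash(f) % NR_CPUS;
}

int main(void)
{
    struct flow a = { 0x0a000001, 0x0a000002, 12345, 80 };
    struct flow b = { 0x0a000003, 0x0a000002, 23456, 80 };

    /* Same flow always lands on the same CPU (keeps packet ordering);
     * different flows can land on different CPUs. */
    printf("flow a -> cpu %d\n", pick_cpu(&a));
    printf("flow b -> cpu %d\n", pick_cpu(&b));
    return 0;
}

This is essentially what RSS does in hardware; the patches do the same
steering in software for NICs that cannot.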
