On Fri, 2012-02-10 at 22:05 -0500, Neil Horman wrote:
> > This approach looks fine as we discussed before, but it might be
> > worth checking later with perf whether we could restore softirq
> > processing; the fix may be fine even with the added RCU locking,
> > since RCU is cheap, but at the moment I don't know how. We might
> > also fix this by removing the many per-cpu fcoe rx threads and
> > instead using either work queues or softirq context for the rx
> > path only.
> >
> Agreed, my thought was to use RCU locking to toggle between two list
> heads in rx thread context, i.e. for each rx thread:
> 1) lock the rx_list lock
> 2) do an rcu_assign_pointer of one of the two list heads to a list
>    head pointer
> 3) call synchronize_rcu
> 4) process all items on the list that was not assigned to the list
>    pointer
>
> If we do that, then the softirq context can just take the
> rcu_read_lock, dereference the list head pointer from (2), do a
> list_add, and an rcu_read_unlock. That would be considerably more
> lightweight than sharing the lock between softirq and process
> contexts. perf runs could give us a more exact speedup.
RCU works best with structures that mostly don't change, and disabling _bh on a per-cpu lock shouldn't be expensive; on the other hand, synchronize_rcu may take longer on systems with more CPUs. But as you said, a perf run would give an exact measure with the change, so it might be worth trying that to check.

Thanks,
Vasu

_______________________________________________
devel mailing list
devel@open-fcoe.org
https://lists.open-fcoe.org/mailman/listinfo/devel