"Paul E. McKenney" (on Tue, 17 Jan 2006 09:26:17 -0800) wrote: >On Thu, Jan 12, 2006 at 07:17:41AM -0800, Paul E. McKenney wrote: >> On Thu, Jan 12, 2006 at 06:50:34PM +1100, Keith Owens wrote: >> > "Paul E. McKenney" (on Wed, 11 Jan 2006 22:51:15 -0800) wrote: >> > >On Thu, Jan 12, 2006 at 05:19:01PM +1100, Keith Owens wrote: >> > >> OK, I have thought about it and ... >> > >> >> > >> notifier_call_chain_lockfree() must be called with preempt disabled. >> > >> >> > >Fair enough! A comment, perhaps? In a former life I would have also >> > >demanded debug code to verify that preemption/interrupts/whatever were >> > >actually disabled, given the very subtle nature of any resulting bugs... >> > >> > Comment - OK. Debug code is not required, the reference to >> > smp_processor_id() already does all the debug checks that >> > notifier_call_chain_lockfree() needs. CONFIG_PREEMPT_DEBUG is your >> > friend. >> >> Ah, debug_smp_processor_id(). Very cool!!! > >One other thing -- given that you are requiring that the read side >have preemption disabled, another update-side option would be to >use synchronize_sched() to wait for all preemption-disabled code >segments to complete. This would allow you to remove the per-CPU >reference counters from the read side, but would require that the >update side either (1) be able to block or (2) be able to defer >the cleanup to process context.
Originally I looked at that code, but the comment scared me off.
synchronize_sched() maps to synchronize_rcu(), and the comment says that
synchronize_rcu() only protects against rcu_read_lock(), not against
preempt_disable().

/**
 * synchronize_sched - block until all CPUs have exited any non-preemptive
 * kernel code sequences.
 *
 * This means that all preempt_disable code sequences, including NMI and
 * hardware-interrupt handlers, in progress on entry will have completed
 * before this primitive returns.  However, this does not guarantee that
 * softirq handlers will have completed, since in some kernels, these
 * handlers can run in process context, and can block.
 *
 * This primitive provides the guarantees made by the (deprecated)
 * synchronize_kernel() API.  In contrast, synchronize_rcu() only
 * guarantees that rcu_read_lock() sections will have completed.
 */
#define synchronize_sched() synchronize_rcu()
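FWIW, the only reason that #define can work at all is that the classic,
non-preemptible RCU implementation makes the read side a preempt-disabled
region anyway -- roughly (my reading of that implementation, not something
the comment above promises):

#define rcu_read_lock()		preempt_disable()
#define rcu_read_unlock()	preempt_enable()

so on such kernels a preempt_disable() section happens to be covered, but
the comment is warning that the synchronize_rcu() API only guarantees the
rcu_read_lock() case.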