On Tue, 06 Jan, at 06:36:41PM, Peter Zijlstra wrote:
> On Fri, Nov 14, 2014 at 09:15:11PM +0000, Matt Fleming wrote:
> > @@ -417,17 +857,38 @@ static u64 intel_cqm_event_count(struct perf_event *event)
> >  	if (!cqm_group_leader(event))
> >  		return 0;
> >  
> > -	on_each_cpu_mask(&cqm_cpumask, __intel_cqm_event_count, &rr, 1);
> > +	/*
> > +	 * Notice that we don't perform the reading of an RMID
> > +	 * atomically, because we can't hold a spin lock across the
> > +	 * IPIs.
> > +	 *
> > +	 * Speculatively perform the read, since @event might be
> > +	 * assigned a different (possibly invalid) RMID while we're
> > +	 * busy performing the IPI calls. It's therefore necessary to
> > +	 * check @event's RMID afterwards, and if it has changed,
> > +	 * discard the result of the read.
> > +	 */
> > +	raw_spin_lock_irqsave(&cache_lock, flags);
> > +	rr.rmid = event->hw.cqm_rmid;
> > +	raw_spin_unlock_irqrestore(&cache_lock, flags);
> 
> You don't actually have to hold the lock here, only ACCESS_ONCE() or
> whatever newfangled thing replaced that.

Remind me again, are accesses to 'int' guaranteed to be atomic? There's
no way to read a partial value?
-- 
Matt Fleming, Intel Open Source Technology Center

