More info on ping-pong effect, as well as L1 vs L2 cache:

http://morecores.cn/publication/pdf/Computing%20PI%20to%20Understand%20Key%20Issues%20in%20Multi.pdf

On Sep 28, 10:02 am, peter teoh <tteik...@dso.org.sg> wrote:
> http://everything2.com/index.pl?node_id=1382347
> "cache line ping-pong" (idea) by cebix, Sat Nov 02 2002 at 13:45:57
>
>
>
> One way of maintaining cache coherence in multiprocessing designs with CPUs 
> that have local caches is to ensure that single cache lines are never held 
> by more than one CPU at a time. With write-through caches, this is easily 
> implemented by having the CPUs invalidate cache lines on snoop hits.
>
> However, if multiple CPUs are working on the same set of data from main 
> memory, this can lead to the following scenario:
>
>  1. CPU #1 reads a cache line from memory.
>  2. CPU #2 reads the same line; CPU #1 snoops the access and invalidates 
>     its local copy.
>  3. CPU #1 needs the data again and has to re-read the entire cache line, 
>     invalidating the copy in CPU #2 in the process.
>  4. CPU #2 now also re-reads the entire line, invalidating the copy in 
>     CPU #1.
>  5. Lather, rinse, repeat.
>
> The result is a dramatic performance loss because the CPUs keep fetching 
> the same data over and over again from slow main memory.
>
> Possible solutions include:
>
>  * Use a smarter cache coherence protocol, such as MESI.
>  * Mark the address space in question as cache-inhibited. Most CPUs will 
>    then resort to single-word accesses, which should be faster than 
>    reloading entire cache lines (usually 32 or 64 bytes).
>  * If the data set is small, make one copy in memory for each CPU.
>  * If the data set is large and processed sequentially, have each CPU work 
>    on a different part of it (one starting at the beginning, one at the 
>    middle, etc.).
