On July 20, 2013, 10:05 a.m., Xiangyu Dong wrote:
> > A few more things that are popping up. Thanks for all the updates. It would
> > be great if Ali or Steve could also have a look before we ship this.
>
> Amin Farmahini wrote:
>     Xiangyu,
>     (1) I think you consider a centralized MSHR for all banks. As far as I
>     know, most banked caches use a distributed MSHR.
>     (2) I see that you reject a request if the corresponding bank is busy.
>     Now what about responses? Can a bank send back two responses at the same
>     time (one response for a hit, one response for a previous miss)? I think
>     that is allowed in your patch, which does not seem right to me.
>
> Xiangyu Dong wrote:
>     (1) I think the MSHR belongs to the upper level of the cache in gem5.
>     Which particular cache design are you referring to? I can take a look to
>     see if their L2 contains multiple MSHRs for different banks in L3.
>     Personally, I don't think so.
>     (2) No. A later patch (under my name) will also block the cache line
>     installation if the cache bank is busy. So, there won't be a situation
>     where the cache bank is handling a hit and a miss at the same time.
>
> Amin Farmahini wrote:
>     (1) For any level of cache you can go with a centralized MSHR, or you
>     can go with a distributed one (a small MSHR for each bank). I am not
>     saying you should do it; I am just saying it is good to have a
>     distributed one for many reasons. Currently, gem5 only supports a
>     centralized MSHR for each cache level, but that is because caches are
>     not banked.
>     (2) Let me ask this: is each bank blocking or non-blocking? I mean,
>     could you have multiple in-flight requests for a bank?
>     Also, can you explain what you mean by "cache line installation"?
>     Google could not help me out.
>     Note: I am not trying to oppose your patch, I am just trying to find
>     out if it can be improved.
>
> Xiangyu Dong wrote:
>     (1) I understand what you mean. I'm just curious about your statement
>     that "most banked caches use distributed MSHRs". Any reference will be
>     very helpful for me to better understand it.
>     (2) I think a busy bank means its internal circuit is doing something,
>     and the circuit only does one thing at a time. "Cache line installation"
>     means filling the cache line (e.g., in L2) using the data from the next
>     level (e.g., L3).
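
To make the blocking-bank semantics discussed above concrete, here is a
minimal sketch (this is an illustration, not the patch's actual C++ code):
each bank services one access at a time, and a request arriving while the
bank is busy is rejected and must be retried, which is exactly the situation
that prevents a bank from returning two responses at once.

```python
class BlockingBank:
    """Toy model of one blocking cache bank (hypothetical names)."""

    def __init__(self):
        self.busy_until = 0  # tick at which the in-progress access completes

    def try_access(self, cur_tick, service_latency):
        """Occupy the bank and return True, or return False if it is busy."""
        if cur_tick < self.busy_until:
            return False  # bank busy: the requester must retry later
        self.busy_until = cur_tick + service_latency
        return True
```

With a 4-tick service latency, an access at tick 0 succeeds, a second access
at tick 2 is rejected, and a retry at tick 4 succeeds.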
(1) See "Scalable Cache Miss Handling for High Memory-Level Parallelism".
(2) What you describe complies with a blocking-bank design. I am not an
expert, but as far as I know each bank could be implemented as a
non-blocking bank (that's why you have an MSHR file for each bank; read the
paper above). In terms of performance, the method you mention might work
well for L2 and L3, but not for L1. I hope other people comment on this.

- Amin


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/1809/#review4542
-----------------------------------------------------------


On July 31, 2013, 9:52 p.m., Xiangyu Dong wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1809/
> -----------------------------------------------------------
> 
> (Updated July 31, 2013, 9:52 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> -------
> 
> Changeset 9818:d6c890fd0eab
> ---------------------------
> mem: model data array bank in classic cache
> The classic cache does not model data array banks, i.e., if a read/write
> is being serviced by a cache bank, no other requests should be sent to
> that bank. This patch models a multi-bank cache. Features include:
> 1. Detect if the bank interleave granularity is larger than the cache
>    line size
> 2. Add a CacheBank debug flag
> 3. Differentiate read and write latency
>    3a. read latency is still named hit_latency
>    3b. write latency is named write_latency
> 4. Add write_latency, num_banks, and bank_itlv_bit to the Python parser
> Not modeled in this patch:
> Due to the lack of a retry mechanism in the cache master port, an access
> from the memory side will not be denied if the bank is in service.
> Instead, the bank service time will be extended. This is equivalent to an
> infinite write buffer for cache fill operations.
> 
> 
> Diffs
> -----
> 
>   configs/common/CacheConfig.py 2492d7ccda7e
>   configs/common/Caches.py 2492d7ccda7e
>   configs/common/O3_ARM_v7a.py 2492d7ccda7e
>   configs/common/Options.py 2492d7ccda7e
>   src/mem/cache/BaseCache.py 2492d7ccda7e
>   src/mem/cache/SConscript 2492d7ccda7e
>   src/mem/cache/base.hh 2492d7ccda7e
>   src/mem/cache/base.cc 2492d7ccda7e
>   src/mem/cache/cache_impl.hh 2492d7ccda7e
>   src/mem/cache/tags/Tags.py 2492d7ccda7e
> 
> Diff: http://reviews.gem5.org/r/1809/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Xiangyu Dong
> 

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
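
For readers following the thread: features 1 and 4 of the patch description
(the interleave-granularity check and the `num_banks`/`bank_itlv_bit`
parameters) can be sketched roughly as below. The parameter names mirror
the patch, but the helper functions and the exact check are assumptions for
illustration, not the gem5 implementation; the usual constraint is that the
interleave granularity must be at least the cache line size so a single
line never straddles two banks.

```python
def check_bank_itlv(bank_itlv_bit, line_size):
    """Sanity-check the bank interleave granularity (2**bank_itlv_bit bytes)
    against the cache line size (hypothetical helper)."""
    if (1 << bank_itlv_bit) < line_size:
        raise ValueError("bank interleave granularity smaller than line size")

def bank_index(addr, bank_itlv_bit, num_banks):
    """Select the bank an address maps to, interleaving at bank_itlv_bit."""
    return (addr >> bank_itlv_bit) % num_banks
```

For example, with 64-byte lines, `bank_itlv_bit=6`, and `num_banks=4`,
consecutive lines at 0x00, 0x40, 0x80, 0xC0 map to banks 0, 1, 2, 3.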
