> On July 31, 2013, 10:17 p.m., Andreas Hansson wrote:
> > configs/common/Caches.py, line 54
> > <http://reviews.gem5.org/r/1809/diff/12/?file=36785#file36785line54>
> >
> > Could we not let num_banks be 1 by default and leave this out here?
> >
> > Similarly so for the interleaving bit. Make it 0 by default, and if
> > num_banks is 1 then it is never needed.
>
> Xiangyu Dong wrote:
>     I think I have some confusion here. If the command line never
>     explicitly specifies a parameter value, where is the standard place to
>     set the default value in the code? In BaseCache.py?

Yes, I was thinking BaseCache.py.


> On July 31, 2013, 10:17 p.m., Andreas Hansson wrote:
> > configs/common/Options.py, line 124
> > <http://reviews.gem5.org/r/1809/diff/12/?file=36787#file36787line124>
> >
> > I am not sure it's worth removing this functionality, but would we ever
> > want to interleave on anything besides cache line granularity?
>
> Xiangyu Dong wrote:
>     I will set the default to 0, and 0 means the interleaving bit is set
>     "automatically" to cache line granularity. The user can override this
>     decision by setting this parameter.

Great. Thanks!


> On July 31, 2013, 10:17 p.m., Andreas Hansson wrote:
> > src/mem/cache/base.cc, line 81
> > <http://reviews.gem5.org/r/1809/diff/12/?file=36791#file36791line81>
> >
> > I would suggest p->bank_intlv_high_bit ? p->bank_intlv_high_bit :
> > ceilLog2(blkSize) + bankIntlvBits
>
> Xiangyu Dong wrote:
>     I see. But it should be "ceilLog2(blkSize) + bankIntlvBits - 1", right?

My bad, should have had my morning coffee first.
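For concreteness, the agreed-upon default might look roughly like the line
below in the BaseCache constructor. This is a sketch only, not the patch's
actual code; the member name bankIntlvHighBit on the left-hand side is
invented here, while the expression itself is the one from the exchange
above.

    // Sketch only (assumed member name on the left-hand side). If the user
    // leaves bank_intlv_high_bit at 0, interleave at cache-line granularity:
    // the highest bank-selection bit sits just above the block offset, and
    // the "- 1" turns a bit count into a zero-indexed bit position, which is
    // the correction pointed out above.
    bankIntlvHighBit = p->bank_intlv_high_bit
        ? p->bank_intlv_high_bit
        : ceilLog2(blkSize) + bankIntlvBits - 1;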
> On July 31, 2013, 10:17 p.m., Andreas Hansson wrote:
> > src/mem/cache/cache_impl.hh, line 308
> > <http://reviews.gem5.org/r/1809/diff/12/?file=36792#file36792line308>
> >
> > else? fatal? lat = readLatency?
> >
> > I would think if pkt->isWrite(), lat = writeLatency, and in all other
> > cases lat = readLatency.
>
> Xiangyu Dong wrote:
>     Here is the part I really want to discuss with you more. What does the
>     &lat returned from tags->accessBlock() mean? I can see that most of the
>     time it returns readLatency, but in some cases
>     cache->ticksToCycles(blk->whenReady - curTick()) is returned.
>
>     I'm not sure exactly how it works. My understanding is that the "lat"
>     returned from tags->accessBlock() should be the tag lookup latency, and
>     depending on whether we access tag and data in parallel or not, the
>     final latency should be max(lat, read/writeLatency) or
>     lat + read/writeLatency.

It would be good to get some input from e.g. Steve here, as I am not too
familiar with the intention. What you say makes a lot of sense to me, and
the accessBlock latency calculation should probably be changed accordingly.
Will you do so (possibly in a separate patch if this is getting too much)?


> On July 31, 2013, 10:17 p.m., Andreas Hansson wrote:
> > src/mem/cache/cache_impl.hh, line 1812
> > <http://reviews.gem5.org/r/1809/diff/12/?file=36792#file36792line1812>
> >
> > Could it be extended?
>
> Xiangyu Dong wrote:
>     As you point out, this could happen. The extendService() function is a
>     workaround for the case where we cannot block incoming traffic from the
>     memory side while a bank is busy. We need to apply another patch
>     (#1826) to block the cache installation (from the memory side) when a
>     bank is busy, and after that the extendService() function can be
>     removed.

OK. What I was a bit nervous about is that we schedule the retry based on
what we see right now; if the service is then extended, the cache is still
blocked. This highlights that event-based unblocking is the way to go. I am
not sure it would ever cause problems, but it is confusing that we schedule
a retry (and claim that we know when it is free), and then later on it is
not free when we say go. Does that make sense?

- Andreas


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/1809/#review4560
-----------------------------------------------------------
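Coming back to the accessBlock() latency question: the interpretation
Xiangyu describes could be sketched as below. This is illustrative only, not
gem5 code; the function and parameter names are invented for the sketch, and
whether tag and data are probed in parallel would presumably be a cache
parameter.

    #include <algorithm>

    // Illustrative sketch of the latency interpretation discussed above.
    // tag_lat stands for the tag lookup latency returned through the &lat
    // argument of tags->accessBlock(); read_latency/write_latency model the
    // data array access. Parallel tag/data lookup takes the max of the two,
    // serial lookup takes their sum.
    unsigned
    accessLatency(bool is_write, bool parallel_tag_data, unsigned tag_lat,
                  unsigned read_latency, unsigned write_latency)
    {
        unsigned data_lat = is_write ? write_latency : read_latency;
        return parallel_tag_data ? std::max(tag_lat, data_lat)
                                 : tag_lat + data_lat;
    }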
On Aug. 1, 2013, 7:15 a.m., Xiangyu Dong wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1809/
> -----------------------------------------------------------
>
> (Updated Aug. 1, 2013, 7:15 a.m.)
>
>
> Review request for Default.
>
>
> Repository: gem5
>
>
> Description
> -------
>
> Changeset 9818:438f37f51ba2
> ---------------------------
> mem: model data array bank in classic cache
>
> The classic cache does not model data array banks, i.e. if a read/write is
> being serviced by a cache bank, no other requests should be sent to that
> bank. This patch models a multi-bank cache. Features include:
>
> 1. Detect whether the bank interleave granularity is larger than the cache
>    line size
> 2. Add a CacheBank debug flag
> 3. Differentiate read and write latency
>    3a. read latency is named read_latency
>    3b. write latency is named write_latency
> 4. Add write_latency, num_banks, bank_itlv_bit to the Python parser
> 5. Enable the bank model with --l1-bank-model, --l2-bank-model,
>    --l3-bank-model
>
> Not modeled in this patch:
> Due to the lack of a retry mechanism in the cache master port, an access
> from the memory side will not be denied if the bank is in service.
> Instead, the bank service time will be extended. This is equivalent to an
> infinite write buffer for cache fill operations.
>
>
> Diffs
> -----
>
>   configs/common/CacheConfig.py 2492d7ccda7e
>   configs/common/Caches.py 2492d7ccda7e
>   configs/common/O3_ARM_v7a.py 2492d7ccda7e
>   configs/common/Options.py 2492d7ccda7e
>   src/mem/cache/BaseCache.py 2492d7ccda7e
>   src/mem/cache/SConscript 2492d7ccda7e
>   src/mem/cache/base.hh 2492d7ccda7e
>   src/mem/cache/base.cc 2492d7ccda7e
>   src/mem/cache/cache_impl.hh 2492d7ccda7e
>   src/mem/cache/tags/Tags.py 2492d7ccda7e
>
> Diff: http://reviews.gem5.org/r/1809/diff/
>
>
> Testing
> -------
>
>
> Thanks,
>
> Xiangyu Dong
>

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
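As an appendix to the thread, here is a very rough sketch of the bank
behaviour the changeset above describes: a bank is selected from the address
bits above the interleave low bit, and an access that hits a busy bank has
its service pushed out (the "extended service time" noted under "Not modeled
in this patch") rather than being rejected. Every name in the sketch is
invented for illustration and is not the patch's actual code.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Illustration only -- not the gem5 patch. One busy-until tick per bank;
    // a busy bank delays the start of service instead of refusing the access.
    struct BankSketch
    {
        uint64_t intlvLowBit;             // lowest address bit picking a bank
        std::vector<uint64_t> busyUntil;  // tick at which each bank is free

        BankSketch(unsigned numBanks, uint64_t lowBit)
            : intlvLowBit(lowBit), busyUntil(numBanks, 0) {}

        unsigned bankOf(uint64_t addr) const
        {
            return (addr >> intlvLowBit) % busyUntil.size();
        }

        // Returns the tick at which the access completes; if the bank is
        // still busy, service starts when it frees up, i.e. the service
        // window is extended.
        uint64_t access(uint64_t addr, uint64_t now, uint64_t serviceTime)
        {
            unsigned bank = bankOf(addr);
            uint64_t start = std::max(now, busyUntil[bank]);
            busyUntil[bank] = start + serviceTime;
            return busyUntil[bank];
        }
    };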
