Hi Amin,
Yes, that's my current assumption. I think it means the underlying SRAM cell only has one read-write port. I guess that's the case for most L2/L3 caches, though L1 might be built with SRAM bitcells that have multiple ports. That might be the next thing to do after this patch becomes formal and stable.

Thank you!

Best,
Xiangyu

From: Amin Farmahini [mailto:[email protected]]
Sent: Monday, April 15, 2013 8:31 PM
To: gem5 Developer List
Cc: Xiangyu Dong
Subject: Re: [gem5-dev] Review Request: mem: model data array bank in classic cache

Hi Xiangyu,

Do you assume each cache bank is implemented as a blocking bank? That is, do you assume that while a bank is servicing a request, it will not accept any more requests until that request is serviced?

Thanks,
Amin

On Mon, Apr 15, 2013 at 7:01 PM, Ali Saidi <[email protected]> wrote:

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/1809/#review4232
-----------------------------------------------------------

Thanks for this patch, it's nice to be able to model bank contention! What about blocking the port when there is a bank conflict and some number of write buffers are occupied?

configs/common/Options.py <http://reviews.gem5.org/r/1809/#comment3994>

    Because these are defaults, all of them are going to override the defaults in Caches.py when CacheConfig.py is run. We do this in other places, but it's good to know, and perhaps they should be the same latency as the other defaults?

- Ali Saidi


On March 31, 2013, 3:48 p.m., Xiangyu Dong wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1809/
> -----------------------------------------------------------
>
> (Updated March 31, 2013, 3:48 p.m.)
>
>
> Review request for Default.
>
>
> Description
> -------
>
> Changeset 9627:6122d201ff80
> ---------------------------
> mem: model data array bank in classic cache
>
> The classic cache does not model data array banks, i.e., if a read/write is
> being serviced by a cache bank, no other requests should be sent to that bank.
> This patch models a multi-bank cache. Features include:
> 1. Detect if the bank interleave granularity is larger than the cache line size
> 2. Add a CacheBank debug flag
> 3. Differentiate read and write latency
>    3a. Read latency is still named hit_latency
>    3b. Write latency is named write_latency
> 4. Add write_latency, num_banks, bank_itlv_bit to the Python parameters
>
> Not modeled in this patch:
> Due to the lack of a retry mechanism in the cache master port, an access from
> the memory side will not be denied if the bank is in service. Instead, the bank
> service time will be extended. This is equivalent to an infinite write buffer
> for cache fill operations.
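[Editor's note: to make the bank-contention behavior described above concrete, here is a minimal, standalone Python sketch. It is not the patch's code; the class and method names are invented for illustration, and only the parameter names (num_banks, bank_itlv_bit, hit_latency, write_latency) come from the review request.]

# Sketch of a blocking-bank data array: each bank has a single read-write
# port, so a new access cannot start until the bank's previous access ends.
class BankedDataArray:
    def __init__(self, num_banks, bank_itlv_bit, read_latency, write_latency):
        self.num_banks = num_banks
        self.bank_itlv_bit = bank_itlv_bit  # first address bit used for bank selection
        self.read_latency = read_latency    # corresponds to hit_latency in the patch
        self.write_latency = write_latency
        self.busy_until = [0] * num_banks   # tick at which each bank frees up

    def bank_index(self, addr):
        # Banks are interleaved on the address bits starting at bank_itlv_bit.
        return (addr >> self.bank_itlv_bit) % self.num_banks

    def cpu_side_access(self, addr, is_write, cur_tick):
        # CPU-side request: a busy bank is a bank conflict, so the request
        # cannot be accepted yet; otherwise the bank is occupied for the
        # read or write latency.
        bank = self.bank_index(addr)
        if cur_tick < self.busy_until[bank]:
            return None                     # bank conflict: caller must retry later
        latency = self.write_latency if is_write else self.read_latency
        self.busy_until[bank] = cur_tick + latency
        return self.busy_until[bank]

    def mem_side_fill(self, addr, cur_tick):
        # Fill from the memory side: never denied (no retry mechanism on the
        # cache master port), so a busy bank simply has its service time
        # extended -- effectively an infinite write buffer for fills.
        bank = self.bank_index(addr)
        start = max(cur_tick, self.busy_until[bank])
        self.busy_until[bank] = start + self.write_latency
        return self.busy_until[bank]


# Example: 4 banks interleaved at 64-byte (cache-line) granularity -> bit 6.
array = BankedDataArray(num_banks=4, bank_itlv_bit=6,
                        read_latency=2, write_latency=3)
print(array.cpu_side_access(0x1000, is_write=False, cur_tick=0))  # bank 0 -> done at 2
print(array.cpu_side_access(0x1040, is_write=False, cur_tick=0))  # bank 1 -> done at 2
print(array.cpu_side_access(0x1010, is_write=False, cur_tick=1))  # bank 0 busy -> None
print(array.mem_side_fill(0x1000, cur_tick=1))                    # bank 0 extended -> 5
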
>
>
> Diffs
> -----
>
>   configs/common/Caches.py 47591444a7c5
>   configs/common/Options.py 47591444a7c5
>   src/mem/cache/BaseCache.py 47591444a7c5
>   src/mem/cache/SConscript 47591444a7c5
>   src/mem/cache/base.hh 47591444a7c5
>   src/mem/cache/base.cc 47591444a7c5
>   src/mem/cache/cache_impl.hh 47591444a7c5
>   configs/common/CacheConfig.py 47591444a7c5
>
> Diff: http://reviews.gem5.org/r/1809/diff/
>
>
> Testing
> -------
>
>
> Thanks,
>
> Xiangyu Dong
>

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
