Hi Ali.
I don't think a direct mapping from virtual to physical addresses alone
can explain this behavior. In the benchmark, I have one contiguous
chunk of data that I read repeatedly. In the case where I see the
strange behavior, the direct-mapped data cache is twice as big as the
data. Even with a worst-case mapping, that should produce very few conflict misses.
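To make the arithmetic concrete, here is a rough Python sketch of what I mean
(the 32kB cache, 16kB buffer, base address, and 64-byte lines are illustrative
numbers, not my exact configuration, and the index function is just the usual
(addr / line_size) mod num_sets, nothing gem5-specific):

    # Illustrative only: every line of a contiguous buffer that is half
    # the size of a direct-mapped cache lands in its own set.
    line_size = 64
    cache_size = 32 * 1024             # direct-mapped, 2x the data
    data_size = 16 * 1024
    num_sets = cache_size // line_size
    base = 0x10000                     # arbitrary contiguous buffer

    indices = [((base + off) // line_size) % num_sets
               for off in range(0, data_size, line_size)]
    assert len(indices) == len(set(indices))   # no two lines share a set
    print(len(indices), "lines,", len(set(indices)), "distinct sets")

So after the first pass, repeated reads should essentially all hit.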
Is there a way to see what's going on under the hood in gem5 that might
be causing the cache to miss? I've ruled out the compiler doing anything
strange.
Thanks,
Erik
On 12/03/13 18:26, Ali Saidi wrote:
Hi Erik,
It doesn't, but memory allocation is pretty dumb in SE mode (it's
essentially a direct VA -> PA mapping), so it's certainly possible
you're getting into a case where lots of things conflict.
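For example (numbers made up, and this is just the textbook direct-mapped
index written in Python, nothing gem5-specific): if two hot physical
addresses end up a multiple of the cache size apart, they index to the same
set and evict each other on every access.

    # Illustrative only: addresses a multiple of the cache size apart
    # alias to the same set in a direct-mapped cache.
    line_size = 64
    cache_size = 16 * 1024
    num_sets = cache_size // line_size

    def cache_set(addr):
        return (addr // line_size) % num_sets

    a = 0x20040                  # e.g. the benchmark buffer
    b = a + 4 * cache_size       # e.g. stack or globals far away
    print(cache_set(a), cache_set(b))   # same set -> constant ping-pong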
Ali
On 12.03.2013 16:27, Erik Tomusk wrote:
Hello All,
Does the classic memory model do any sort of address hashing or other
similar magic when storing data in the L1D cache?
I've been running a very simple microbenchmark with varying sizes of the
L1D cache and data set (in SE mode). For a very small number of
combinations of data set and cache size, the miss rate goes through the roof
(>50x what might be expected based on other simulations). This is
consistent with, e.g., a collision in a hash function used to map logical
addresses to physical ones.
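To put rough numbers on what I'd expect (an illustrative Python sketch,
assuming a 16kB data set, 64-byte lines, one load per line per pass, and
a cache large enough to hold the data; none of these are my exact settings):

    # Back-of-the-envelope expectation: if the data fits in the cache,
    # only the first pass should miss.
    line_size = 64
    data_size = 16 * 1024
    passes = 1000

    lines = data_size // line_size       # 256 lines of data
    cold_misses = lines                  # first pass only
    accesses = passes * lines
    print(cold_misses / accesses)        # 0.001, i.e. ~0.1% miss rate

Anything tens of times above that clearly isn't just cold misses.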
I thought I'd ask if this is expected behavior before I go digging
through the code.
Thanks,
Erik