Assign a block size of 32 to both the caches; the L1 and L2 block sizes have to
match, otherwise the L2 sees requests whose size differs from its own block
size, which is most likely what trips the `blkSize == pkt->getSize()'
assertion. I also think there are some issues with a 16-byte block size on its
own.
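
If I remember correctly, the cache classes in configs/splash2/run.py look
roughly like the sketch below (the exact defaults such as mshrs may differ in
your M5 revision); the important part is that block_size gets the same value,
e.g. 32, in both L1 and L2:

# inside configs/splash2/run.py, after the command-line options are parsed
class L1(BaseCache):
    latency = options.l1latency
    block_size = 32        # was 16 -- must match L2.block_size
    mshrs = 12
    tgts_per_mshr = 8

class L2(BaseCache):
    latency = options.l2latency
    block_size = 32        # same block size as the L1
    mshrs = 92
    tgts_per_mshr = 16

With both set to 32 the assertion should not fire anymore.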

On Sun, Jan 23, 2011 at 1:14 AM, sunitha p <[email protected]> wrote:

> Hi all,
>
> I am trying to change the block sizes of the L1 and L2 caches.
>
> I have changed the L1 cache block size to 16 and the L2 block size to 32 in
> the run.py file, but I am getting the errors below.
>
> M5 Simulator System
>
> Copyright (c) 2001-2008
> The Regents of The University of Michigan
> All Rights Reserved
>
>
> M5 compiled Jan 13 2011 12:49:12
> M5 revision Unknown
> M5 started Jan 23 2011 01:12:36
> M5 executing on sunita
> command line: build/ALPHA_SE/m5.debug configs/splash2/run.py --rootdir
> splash/splash2/codes -t -m 10000000000 --frequency 1GHz -n 4 --l1size 16kB
> --l2size 64kB --l1latency 4ns --l2latency 11ns -b FMM
> Global frequency set at 1000000000000 ticks per second
> 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
> 0: system.remote_gdb.listener: listening for remote gdb #1 on port 7001
> 0: system.remote_gdb.listener: listening for remote gdb #2 on port 7002
> 0: system.remote_gdb.listener: listening for remote gdb #3 on port 7003
> info: Entering event queue @ 0.  Starting simulation...
> info: Increasing stack size by one page.
> m5.debug: build/ALPHA_SE/mem/cache/cache_impl.hh:347: bool
> Cache<TagStore>::access(Packet*, typename TagStore::BlkType*&, int&,
> PacketList&) [with TagStore = LRU]: Assertion `blkSize == pkt->getSize()'
> failed.
> Program aborted at cycle 173377000
> Aborted
>
> Kindly help.
>
> --
> Sunitha.P
> 9092892876
>
>



-- 

Thanks & regards,
BISWABANDAN PANDA
M.S. (RESEARCH SCHOLAR)
RISE LAB
IIT MADRAS

http://www.cse.iitm.ac.in/~biswa/

"Happy New Year 2011"
_______________________________________________
m5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
