Thank you for the quick reply. Just to make sure I'm understanding
correctly, are these two configurations correct then?
http://i.imgur.com/Dnym7rQ.png
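
For what it's worth, my mental model of the wiring is based on the rough
sketch below of configs/common/CacheConfig.py; the class and method names
are my approximations from memory and may not match the actual file exactly:

# Sketch only -- approximate structure of config_cache() in
# configs/common/CacheConfig.py; names may differ in your gem5 version.
from m5.objects import *     # bus/crossbar and cache SimObjects
from Caches import *         # L1Cache / L2Cache parameter classes

def config_cache(options, system):
    if options.l2cache:
        # Shared L2 behind its own crossbar; only the L2 connects to the
        # system memory bus.
        system.l2 = L2Cache(size=options.l2_size)
        system.tol2bus = CoherentBus()
        system.l2.cpu_side = system.tol2bus.master
        system.l2.mem_side = system.membus.slave

    for i in xrange(options.num_cpus):
        if options.caches:
            # Private split L1s: icache and dcache each get their own
            # CPU-side port and their own mem-side port.
            icache = L1Cache(size=options.l1i_size)
            dcache = L1Cache(size=options.l1d_size)
            system.cpu[i].addPrivateSplitL1Caches(icache, dcache)
        if options.l2cache:
            # With an L2, the L1 mem-side ports go to the L2 crossbar.
            system.cpu[i].connectAllPorts(system.tol2bus, system.membus)
        else:
            # Without an L2, each L1's mem-side port connects directly to
            # the memory bus.
            system.cpu[i].connectAllPorts(system.membus)
    return system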

And can each port read and write simultaneously? If so, in the
configuration with no L2, could the L1 caches technically have twice
the total bandwidth, or is it limited to one port reading/writing at a time?

Thank you again.

On Thu, Feb 21, 2013 at 7:39 PM, Tao Zhang <[email protected]> wrote:

> Hi Gabriel,
>
> When you use both L1 and L2 caches, the second figure is correct. When you
> only use L1, the Icache and Dcache connect to the memory bus separately (just
> remove the L2 in figure 2). You can refer to configs/common/CacheConfig.py for
> the connection details.
>
> -Tao
>
>
> On 02/21/2013 07:30 PM, Gabriel Yessin wrote:
>
> I'm currently using the ARM CPU and the detailed configuration, with example
> input such as:
>
> ./build/ARM/gem5.fast ... configs/example/fs.py ...  --caches
> --cpu-type=detailed --l1d_size=32kB --l1i_size=32kB --l2cache
> --l2_size=2048kB --clock=0.75GHz
>
>  I'm trying to understand the exact configuration here: does each L1
> cache have a separate port going to the CPU, and a separate port going to
> the L2 cache?
>
>  If there were no L2 cache, would each L1 cache have a direct port to
> memory?
>
>  In other words, I'm trying to determine which, if any, of these figures
> accurately portrays this memory configuration.
>
>  http://imgur.com/a/8WVeR
>
>



-- 
Gabriel Yessin
B.S. Biomedical Engineering, May 2011
M.S. Computer Engineering, May 2013
The George Washington University
774.238.0101
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
