Hi Mitch,

If you want to verify your protocol changes more thoroughly, I suggest
using the random memory tester via configs/example/memtest.py.

I'm appending a little python script I've used in the past to run the
tester on a variety of configurations.

Steve

-----

#! /usr/bin/env python

import os

base_cmd = "build/ALPHA_SE/m5.opt configs/example/memtest.py " \
           "-l 5000000 --progress 0 "

# use (True, False) to test atomic as well as timing mode
for atomic in (False,):
    for cache_levels in (1, 2, 3):
        for cpus in (1, 2, 4):
            spec = ([cpus] * cache_levels) + [1]
            args = "-t " + ":".join(map(str, spec))
            if atomic:
                args += " -a"
            else:
                args += " --force-bus"
            filename = "trace" + args.replace(" ", "")
            print args.ljust(20),
            status = os.system(base_cmd + args + " > " + filename + " 2>&1")
            if status == 0:
                print "OK"
            else:
                print "FAILED"

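For what it's worth, the same sweep can also be written for Python 3 using subprocess instead of os.system. This is only a sketch, under the assumption that memtest.py still accepts the same -l/--progress/-t/-a/--force-bus options used above:

```python
#!/usr/bin/env python3
import subprocess

# Assumed unchanged from the script above.
BASE_CMD = ("build/ALPHA_SE/m5.opt configs/example/memtest.py "
            "-l 5000000 --progress 0")

def build_args(cpus, cache_levels, atomic=False):
    """Build the memtest.py argument string for one configuration.

    The tree spec puts <cpus> testers under each cache level and a
    single terminal memory at the bottom, e.g. 4:4:1 for 4 CPUs and
    2 cache levels.
    """
    spec = [cpus] * cache_levels + [1]
    args = "-t " + ":".join(map(str, spec))
    args += " -a" if atomic else " --force-bus"
    return args

def run_sweep():
    """Run every configuration, logging each run to its own trace file."""
    for cache_levels in (1, 2, 3):
        for cpus in (1, 2, 4):
            args = build_args(cpus, cache_levels)
            logname = "trace" + args.replace(" ", "")
            with open(logname, "w") as log:
                # Redirect both stdout and stderr into the trace file,
                # mirroring the "> file 2>&1" shell redirection above.
                status = subprocess.call(BASE_CMD.split() + args.split(),
                                         stdout=log,
                                         stderr=subprocess.STDOUT)
            print(args.ljust(20), "OK" if status == 0 else "FAILED")

# Call run_sweep() to launch the jobs.
```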


On Mon, Oct 17, 2011 at 1:15 PM, Mitch Hayenga
<[email protected]> wrote:
> I actually made a slight modification of the m5 classic protocol to "fix"
> this: I allowed the data to remain dirty in the L2 and forwarded a
> version that looked clean-exclusive to the L1.  The way m5 is structured,
> the L2 will already snoop upward to the L1 if it gets a request from below,
> so I modified the handleSnoop() routine to check whether an upper-level
> cache had asserted memInhibit() (as it would if the L1 had written the
> line).  If so, the L2 doesn't respond to a snoop from below in the cache
> hierarchy.  If the L1 ever evicts the data (without writing it back), the
> L2 will properly respond to later snoops/requests.
> So far it seems to be "ok" (my definition of ok being "runs SPLASH"), and
> it has greatly reduced the recorded number of writebacks from the L1 cache.
> I haven't fully verified it, though.
>
> On Mon, Oct 17, 2011 at 2:56 PM, Steve Reinhardt <[email protected]> wrote:
>>
>> Since the cache hierarchy doesn't enforce inclusion, neither one of
>> these is true.
>>
>> In general at most one cache in the system (across all levels) will be
>> the owner at any particular point in time.  So if an L1 has a block in
>> M or O state, no other L1 or L2 will have it in that state.
>>
>> Steve
>>
>> On Sun, Oct 16, 2011 at 9:12 AM, biswabandan panda <[email protected]>
>> wrote:
>> > Hi all,
>> >            Do the MOESI states in the L2 (assuming it is the last level
>> > and shared by all the cores) follow a cache-centric or a memory-centric
>> > approach in the classic snoop-based coherence protocol?
>> >
>> > Cache-centric: if a block is in M state in any of the L1s and is also
>> > present in the L2, then it will be in M state in the L2 as well;
>> > similarly for S and O.
>> >
>> > Memory-centric: if the block is in I state in the L1s, then the L2 will
>> > hold that block (if present in the L2) in O state (because the L2 will
>> > behave like the owner for the entire hierarchy).
>> >
>> >
>> >
>> > --
>> >
>> > thanks & regards,
>> > BISWABANDAN
>> >
>> > _______________________________________________
>> > gem5-users mailing list
>> > [email protected]
>> > http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>> >