Hello everyone, 

I am trying to simulate a system with 4 CPUs and
one shared L1 cache (a single cache shared among all the CPUs).
I tried to modify MESI_CMP_directory.py in
/gem5/configs/ruby by changing this code
(I removed the "for" loop and replaced i with 0;
the idea was to create only one L1 cache):

###################################################################
    for i in xrange(options.num_cpus):
        #
        # First create the Ruby objects associated with this cpu
        #
        l1i_cache = L1Cache(size = options.l1i_size,
                            assoc = options.l1i_assoc,
                            start_index_bit = block_size_bits,
                            is_icache = True)
        l1d_cache = L1Cache(size = options.l1d_size,
                            assoc = options.l1d_assoc,
                            start_index_bit = block_size_bits,
                            is_icache = False)

        l1_cntrl = L1Cache_Controller(version = i,
                                      cntrl_id = cntrl_count,
                                      L1IcacheMemory = l1i_cache,
                                      L1DcacheMemory = l1d_cache,
                                      l2_select_num_bits = l2_bits,
                                      send_evictions = (
                                          options.cpu_type == "detailed"),
                                      ruby_system = ruby_system)

        cpu_seq = RubySequencer(version = i,
                                icache = l1i_cache,
                                dcache = l1d_cache,
                                ruby_system = ruby_system)

        l1_cntrl.sequencer = cpu_seq

        if piobus != None:
            cpu_seq.pio_port = piobus.slave

        exec("system.l1_cntrl%d = l1_cntrl" % i)
        
        #
        # Add controllers and sequencers to the appropriate lists
        #
        cpu_sequencers.append(cpu_seq)
        l1_cntrl_nodes.append(l1_cntrl)
        
        cntrl_count += 1
###################################################################


to this:


###################################################################
    #for i in xrange(options.num_cpus):
    #
    # First create the Ruby objects associated with this cpu
    #
    l1i_cache = L1Cache(size = options.l1i_size,
                        assoc = options.l1i_assoc,
                        start_index_bit = block_size_bits,
                        is_icache = True)
    l1d_cache = L1Cache(size = options.l1d_size,
                        assoc = options.l1d_assoc,
                        start_index_bit = block_size_bits,
                        is_icache = False)

    l1_cntrl = L1Cache_Controller(version = 0,
                                  cntrl_id = cntrl_count,
                                  L1IcacheMemory = l1i_cache,
                                  L1DcacheMemory = l1d_cache,
                                  l2_select_num_bits = l2_bits,
                                  send_evictions = (
                                      options.cpu_type == "detailed"),
                                  ruby_system = ruby_system)

    cpu_seq = RubySequencer(version = 0,
                            icache = l1i_cache,
                            dcache = l1d_cache,
                            ruby_system = ruby_system)

    l1_cntrl.sequencer = cpu_seq

    if piobus != None:
        cpu_seq.pio_port = piobus.slave

    exec("system.l1_cntrl%d = l1_cntrl" % i)
    
    #
    # Add controllers and sequencers to the appropriate lists
    #
    cpu_sequencers.append(cpu_seq)
    l1_cntrl_nodes.append(l1_cntrl)
    
    cntrl_count += 1


and I have deleted the code that creates the L2 caches.
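
For reference, the deleted L2 block looked roughly like this (I am
quoting it from memory, so the exact parameter names may differ
slightly from the file):

###################################################################
    l2_index_start = block_size_bits + l2_bits

    for i in xrange(options.num_l2caches):
        #
        # First create the Ruby objects associated with this L2 cache
        #
        l2_cache = L2Cache(size = options.l2_size,
                           assoc = options.l2_assoc,
                           start_index_bit = l2_index_start)

        l2_cntrl = L2Cache_Controller(version = i,
                                      cntrl_id = cntrl_count,
                                      L2cacheMemory = l2_cache,
                                      ruby_system = ruby_system)

        exec("system.l2_cntrl%d = l2_cntrl" % i)
        l2_cntrl_nodes.append(l2_cntrl)
        cntrl_count += 1
###################################################################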

The problem is that I get errors when I try to run simulations.


I would be thankful if someone could help me understand
what is going wrong and how I could get a shared L1 cache working.
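
One thing I noticed while writing this mail: the removed loop was also
what filled the cpu_sequencers list, so Ruby now sees only one
sequencer even though there are four CPUs. A minimal sketch of what I
think might be needed (assuming the sequencer's slave port is a vector
port, so several CPUs can connect to the same sequencer) would be to
list the single sequencer once per CPU, instead of the single
cpu_sequencers.append(cpu_seq) above:

###################################################################
    # one shared sequencer, listed once per CPU so that the rest of
    # the config can connect every core's ports to it
    for i in xrange(options.num_cpus):
        cpu_sequencers.append(cpu_seq)
###################################################################

I am not sure this is correct, so corrections are welcome.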

Thanks in advance, 
Pavlos


