[gem5-users] Re: 'SConsEnvironment' object has no attribute 'M4': when building
Have you checked whether you have the M4 package installed on your machine?

On Thu, Jan 25, 2024 at 11:41 AM Ioannis Constantinou via gem5-users <gem5-users@gem5.org> wrote:
> Hello all,
>
> I'm trying to build gem5 on a new machine and I get the following error:
>
> scons: Reading SConscript files ...
> Checking for linker -Wl,--as-needed support... yes
> Checking for compiler -gz support... yes
> Checking for linker -gz support... yes
> Info: Using Python config: python3-config
> Checking for C header file Python.h... yes
> Checking Python version... 3.10.4
> Checking for accept(0,0,0) in C++ library None... yes
> Checking for zlibVersion() in C++ library z... yes
> Checking for C library tcmalloc... no
> Checking for C library tcmalloc_minimal... no
> Warning: You can get a 12% performance improvement by installing tcmalloc (libgoogle-perftools-dev package on Ubuntu or RedHat).
> Checking for char temp; backtrace_symbols_fd((void *), 0, 0) in C library None... yes
> Checking for C header file fenv.h... yes
> Checking for C header file png.h... yes
> Checking for clock_nanosleep(0,0,NULL,NULL) in C library None... yes
> Checking for C header file valgrind/valgrind.h... no
> Checking for pkg-config package hdf5-serial... no
> Checking for pkg-config package hdf5... no
> Checking for H5Fcreate("", 0, 0, 0) in C library hdf5... no
> Warning: Couldn't find HDF5 C++ libraries. Disabling HDF5 support.
> Checking for C header file linux/if_tun.h... yes
> Checking for shm_open("/test", 0, 0) in C library None... no
> Checking for shm_open("/test", 0, 0) in C library rt... yes
> Checking for C header file linux/kvm.h... yes
> Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library None... yes
> Checking size of struct kvm_xsave ... yes
> Checking for member exclude_host in struct perf_event_attr... yes
> Checking for pkg-config package protobuf... yes
> Checking for GOOGLE_PROTOBUF_VERIFY_VERSION in C++ library protobuf... yes
>
> AttributeError: 'SConsEnvironment' object has no attribute 'M4':
>   File "/onyx/data/p182/GEM5_STABLE_VERSION/gem5_qemu_virt/SConstruct", line 602:
>     main.SConscript(os.path.join(root, 'SConscript'),
>   File "/nvme/h/buildsets/eb_cyclone_rl/software/SCons/4.4.0-GCCcore-11.3.0/lib/python3.10/site-packages/SCons/Script/SConscript.py", line 597:
>     return _SConscript(self.fs, *files, **subst_kw)
>   File "/nvme/h/buildsets/eb_cyclone_rl/software/SCons/4.4.0-GCCcore-11.3.0/lib/python3.10/site-packages/SCons/Script/SConscript.py", line 285:
>     exec(compile(scriptdata, scriptname, 'exec'), call_stack[-1].globals)
>   File "/onyx/data/p182/GEM5_STABLE_VERSION/gem5_qemu_virt/build/libelf/SConscript", line 121:
>     m4env.M4(target=File('libelf_convert.c'),
>
> I have experience building gem5, but on this machine I can't figure out what the problem is. Has anyone faced the same issue before?
>
> The gem5 version I use is 21.1.0.2.
> GCC version is 11.3.
> Python 3.10.4.
>
> Thank you in advance,
> Ioannis Constantinou

___
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org
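For reference, the usual cause is exactly what the reply suggests: SCons only registers its M4 builder (the `env.M4` attribute the libelf SConscript calls) when it can find an m4 binary on PATH. A minimal, plain-Python check (not gem5-specific):

```python
# Check whether an m4 executable is visible on PATH. If it is not, SCons's
# m4 tool is never loaded and env.M4 does not exist, which produces exactly
# the AttributeError reported above.
import shutil

def has_m4() -> bool:
    """Return True if an m4 executable is visible on PATH."""
    return shutil.which("m4") is not None

if __name__ == "__main__":
    if has_m4():
        print("m4 found; the SCons M4 builder should be available")
    else:
        print("m4 missing; install it first (e.g. 'sudo apt install m4' on Ubuntu)")
```

On an HPC module system like the one in the traceback, loading an m4 module (or adding it to the build environment) before running scons serves the same purpose as installing the package.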
[gem5-users] How to pass a reference to one simobject as a configuration parameter of another simobject
Can you pass a reference to one simobject as a parameter of another simobject in the configuration (config.py) file? How?

For example, I have an independent simobject class Center_Ctrl and a class Cache. (Center_Ctrl would not be a member of the Cache object, but Cache would hold a reference (pointer) to center_ctrl so each could call some demo function on the other.) Now, I want something like this:

center_ctrl = Center_Ctrl(parameters)
cache1 = Cache(parameters)
cache2 = Cache(parameters)
cache1.cctrl = center_ctrl
cache2.cctrl = center_ctrl

and inside the Cache class, this cctrl variable would hold the pointer or reference to the center_ctrl simobject. How could I do that?
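For what it's worth, the usual gem5 pattern is to declare a SimObject-typed parameter in the SimObject's Python description; gem5 auto-generates a Param type for every declared SimObject class, and the param is emitted as a plain pointer in the generated C++ Params struct. A sketch under that assumption (class names, headers, and the cctrl param follow the question and are hypothetical, not an existing gem5 API):

```python
# Hypothetical SimObject descriptions; the cxx_header/cxx_class values are
# placeholders for your own C++ implementation files.
from m5.params import Param
from m5.objects import ClockedObject

class Center_Ctrl(ClockedObject):
    type = "Center_Ctrl"
    cxx_header = "mem/center_ctrl.hh"       # hypothetical
    cxx_class = "gem5::CenterCtrl"          # hypothetical

class Cache(ClockedObject):
    type = "MyCache"
    cxx_header = "mem/my_cache.hh"          # hypothetical
    cxx_class = "gem5::MyCache"             # hypothetical
    # A SimObject-typed param: shows up on the C++ side as a CenterCtrl*
    # member of MyCacheParams, so the constructor can store the pointer.
    cctrl = Param.Center_Ctrl("central controller this cache talks to")
```

With a declaration like this, the assignments in the question (`cache1.cctrl = center_ctrl`, with both caches pointing at the same `center_ctrl` instance) should work as written, and the C++ constructor reads the pointer from `params.cctrl`.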
[gem5-users] Re: How to solve the "AttributeError: Can't resolve proxy" error when l1icache is replaced with a new module
Right now my controller object is just intended to work as a wrapper for the icache (once I can take requests from the CPU, send them to the icache, and vice versa, I'll start working on my main target). My simple script:

> ## import libraries
> ## Parameters ##
>
> # Memory
> L1Cache = "32KiB"   # same size data and instruction cache
> Asso_L1 = 4         # associativity
> Asso_L2 = 8
> L2Cache = "256KiB"  # 8 times of L1
> L3Cache = "4096KiB" # 16 times of L2
> DRAM_size = "2GiB"
> ## config for interconnect network
> number_of_cpu = 2
> ClkFreq = "3GHz"
> isa = ISA.X86
>
> requires(
>     isa_required=ISA.X86,
>     kvm_required=True,
> )
>
> cache_hierarchy = PrivateL1SharedL2CacheHierarchy(
>     l1d_size=L1Cache, l1i_size=L1Cache, l2_size=L2Cache,
>     l1d_assoc=Asso_L1, l1i_assoc=Asso_L1, l2_assoc=Asso_L2)
> memory = SingleChannelDDR3_1600(DRAM_size)
> processor = SimpleProcessor(cpu_type=CPUTypes.O3, num_cores=number_of_cpu, isa=isa)
> board = SimpleBoard(clk_freq=ClkFreq, processor=processor, memory=memory, cache_hierarchy=cache_hierarchy)
> # simulation setup
> binary = BinaryResource('/path/to/binary')
> board.set_se_binary_workload(binary)
> # simulation start
> simulator = Simulator(board=board)
> simulator.run()

In PrivateL1SharedL2CacheHierarchy(), I replaced L1ICache with L1PrivateCtrl(), which takes the same arguments as the l1icache:

> self.l1icaches = [
>     L1PrivateCtrl(
>         icache_size=self._l1i_size,
>         icache_assoc=self._l1i_assoc,
>         icache_writeback_clean=False,
>     )
>     for i in range(board.get_processor().get_num_cores())
> ]

My L1PrivateCtrl is a library for my L1PrivateController simobject, which has CacheParams icache_param as a local variable. For now, it just takes the cache parameters, assigns them to icache_param manually, and instantiates the cache objects with the icache_param argument. Right now, I'm just trying to implement my controller object to work as an intermediary between the CPU and the icache.
On Thu, Aug 17, 2023 at 2:37 PM Jason Lowe-Power wrote:
> Hi Shaikhul,
>
> I think that you have somehow unset the `assoc` parameter (or set it to None) in the cache. Can you provide us the exact script you're running, the command line that you use to run, the information about the gem5 build (variant used), the version of gem5 you're using, and any modifications you have made to gem5?
>
> Thanks,
> Jason
>
> On Wed, Aug 16, 2023 at 1:13 PM Khan Shaikhul Hadi via gem5-users <gem5-users@gem5.org> wrote:
>> I have a dedicated controller module that has an L1ICache as a member (I want to intercept all incoming and outgoing requests and responses from the cache and possibly modify them based on some algorithm) with a similar parameter and port structure. In the PrivateL1SharedL2CacheHierarchy cache hierarchy, I wanted to replace the L1Cache with this controller module. When I run my configuration, I encounter this error:
>>
>> Error in unproxying param 'assoc' of board.cache_hierarchy.l1icaches0.tags
>> AttributeError: Can't resolve proxy 'assoc' of type 'Int' from 'board.cache_hierarchy.l1icaches0.tags'
>>
>> Does anyone have any idea how I could solve this?
>>
>> Note: It seems to me that gem5 is building the structure of the system, where the board tries to find l1icaches0 and get the tag value from there; I could not find where this happens. Also, I have found some questions where others faced the same type of error (not exactly the same error) and creating a subsystem resolved it. But I could not find any resources on how this subsystem structure works in gem5 and how to make your own subsystem. Any known resources on that?
>>
>> Best
>> Shaikhul
[gem5-users] How to solve the "AttributeError: Can't resolve proxy" error when l1icache is replaced with a new module
I have a dedicated controller module that has an L1ICache as a member (I want to intercept all incoming and outgoing requests and responses from the cache and possibly modify them based on some algorithm) with a similar parameter and port structure. In the PrivateL1SharedL2CacheHierarchy cache hierarchy, I wanted to replace the L1Cache with this controller module. When I run my configuration, I encounter this error:

Error in unproxying param 'assoc' of board.cache_hierarchy.l1icaches0.tags
AttributeError: Can't resolve proxy 'assoc' of type 'Int' from 'board.cache_hierarchy.l1icaches0.tags'

Does anyone have any idea how I could solve this?

Note: It seems to me that gem5 is building the structure of the system, where the board tries to find l1icaches0 and get the tag value from there; I could not find where this happens. Also, I have found some questions where others faced the same type of error (not exactly the same error) and creating a subsystem resolved it. But I could not find any resources on how this subsystem structure works in gem5 and how to make your own subsystem. Any known resources on that?

Best
Shaikhul
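Some background on the error itself: the classic cache's tag store declares its parameters as proxies (in gem5's BaseSetAssoc, `assoc = Param.Int(Parent.assoc, ...)`), so unproxying walks up the SimObject tree from `tags`. Once the object that owns the tags is your controller rather than a BaseCache, the walk finds no `assoc` and fails, which matches the error text. A sketch of the two usual workarounds, with hypothetical names mirroring the question:

```python
# Hypothetical SimObject description for the wrapper; cxx_header/cxx_class
# are placeholders for your own implementation.
from m5.params import Param
from m5.objects import ClockedObject

class L1PrivateCtrl(ClockedObject):
    type = "L1PrivateCtrl"
    cxx_header = "mem/l1_private_ctrl.hh"    # hypothetical
    cxx_class = "gem5::L1PrivateController"  # hypothetical

    # Workaround 1: declare on the wrapper the params the child's proxies
    # search for, so Parent.assoc / Parent.size resolve at this level.
    assoc = Param.Int("associativity seen by the wrapped icache's tags")
    size = Param.MemorySize("size seen by the wrapped icache's tags")

# Workaround 2 (alternative): bypass the proxy by assigning concrete values
# to the child at configuration time, e.g. in the hierarchy:
#     ctrl.icache.tags.assoc = self._l1i_assoc
```

Either way, the point is that every proxy parameter under the wrapper must be resolvable somewhere on the path from the child up to the root when gem5 instantiates the system.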
[gem5-users] How to connect a CpuSidePort and MemSidePort within a simobject (not in the config file)
In my code I'll have a simobject which has its own cache. As a classical cache uses a CpuSidePort and a MemSidePort to receive and respond to requests, I want to create internal CpuSidePort and MemSidePort objects in my simobject, like below:

> class SimObject : public ClockedObject
> {
>     Cache cache;
>     CpuSidePort cacheMemSidePortConnection;
>     MemSidePort cacheCpuSidePortConnection;
>     // The CpuSidePort and MemSidePort classes could follow the same
>     // structure as BaseCache's CpuSidePort and MemSidePort
>     ...
> };

My question is how I could connect these ports with the cache such that when I schedule a request packet on cacheCpuSidePortConnection, the cache's cpuSidePort catches that packet, and when the cache's memSidePort schedules a request packet, cacheMemSidePortConnection catches it. On the front end, I can see in the library that we can do this via params (cache.cpu_side_port = cpu.mem_side_port), but I could not find any reference that connects ports within a simobject. Any suggestions or resources I could follow?

Best
Shaikhul
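One pattern worth noting: even "internal" port connections in gem5 are normally bound through the Python description rather than in C++ (on the C++ side, `SimObject::getPort(if_name, idx)` is what hands each named port over for binding during instantiation). So the wrapper's own Python class can do the binding, keeping config.py untouched. A sketch with hypothetical names, assuming your controller SimObject exposes BaseCache-style port params:

```python
# Sketch: binding child ports inside the wrapper's own Python class.
# L1PrivateController is assumed to be your own SimObject (not a gem5 class)
# exposing cpu_side_port / mem_side_port params that mirror BaseCache's ports.
from m5.objects import SubSystem, Cache

class CacheWrapper(SubSystem):
    """Wrapper whose internal controller sits between the CPU and a cache."""

    def __init__(self, **cache_kwargs):
        super().__init__()
        self.cache = Cache(**cache_kwargs)
        self.ctrl = L1PrivateController()  # hypothetical custom SimObject
        # Internal binding: the controller's mem side drives the cache's
        # cpu side, so packets the controller forwards land in the cache and
        # the cache's responses come back over the same link.
        self.ctrl.mem_side_port = self.cache.cpu_side
```

The wrapper then exposes the controller's cpu side and the cache's mem side to the outside world, and only those two endpoints ever appear in config.py.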
[gem5-users] Re: How is clflush simulated in the classic cache (not Ruby)?
But my gdb trace shows that request->isMemAccessRequired() is returning false. That's where I'm confused. I'm running this simulation in SE mode.

Shaikhul

[image: ismemAccessRequired.png]

On Tue, Aug 1, 2023 at 5:52 PM Eliot Moss wrote:
> On 8/1/2023 5:15 PM, Khan Shaikhul Hadi via gem5-users wrote:
>> As far as I understand, gem5 simulates the functionality of the clflush instruction for the classic cache. Can anyone explain how it does that?
>>
>> I traced the Clflushopt::initiateAcc() function call, which eventually calls LSQ::pushRequest() in lsq.cc. But after completion of translation, it checks request->isMemAccessRequired() and isLoad, both of which return false. As a result it does not call the write() function, which should put the instruction in the store queue; instead it just returns inst->getFault().
>>
>> Without placing this request in the store queue, how does this request reach the cache to invalidate the block? Where does gem5 get the timing for this clflush instruction?
>
> My reading of the code suggests that request->isMemAccessRequired() will return true, since this is a request. Things will then move on to do the write. Eventually a suitable packet will be sent to memory (interestingly, it carries no data).
>
> HTH
>
> Eliot Moss
[gem5-users] How is clflush simulated in the classic cache (not Ruby)?
As far as I understand, gem5 simulates the functionality of the clflush instruction for the classic cache. Can anyone explain how it does that?

I traced the Clflushopt::initiateAcc() function call, which eventually calls LSQ::pushRequest() in lsq.cc. But after completion of translation, it checks request->isMemAccessRequired() and isLoad, both of which return false. As a result it does not call the write() function, which should put the instruction in the store queue; instead it just returns inst->getFault().

Without placing this request in the store queue, how does this request reach the cache to invalidate the block? Where does gem5 get the timing for this clflush instruction?

Best
Shaikhul
[gem5-users] Can't explain timing results for flush and fence in the classic cache hierarchy
In my configuration I used CPUTypes.O3 and PrivateL1SharedL2CacheHierarchy to check how clflush and fence impact the timing of a workload. In my workload I run 10,000 iterations updating an array, 200 updates per thread. The workload has:

> for ( ; index < ... ) {
>     ARR[index] = thread_ID;
>     FLUSH(&ARR[index]);
>     FENCE;
> }

FLUSH (a macro for _mm_clflush) should take more time to complete than ARR[index+1] = thread_ID, as that memory update should be highly localized while the flush needs acknowledgement from all levels of cache before completing. So FENCE should have a much larger penalty after a flush than after a plain write operation, and I was hoping to see a large execution-time increase from inserting fences in the second scenario. But inserting the fence only increases execution time by 2%, which is counter-intuitive. Can anyone explain why I'm seeing this behaviour? As far as I understand, a memory fence should let the following instructions execute only after all previous instructions have completed and been removed from the store buffer, in which case clflush should take more time than a regular write operation.

Best
Shaikhul
[gem5-users] Re: Fatal error when clflush is included in the workload for an O3 system simulation
Hi,
Thank you for your response with the patch link. It helped me a lot to understand what's going on and the limitations of clflush.

Do you have any idea whether the arm ISA's clflush alternative is implemented properly in gem5? I work on persistent memory, and for the x86 ISA you need clflush and fence (which also may not be implemented properly). If I move to arm, clflush and fence should be replaced with instructions of similar functionality (I have no experience with the arm ISA, so I can't name the exact instructions; most likely DC CVAU and a memory barrier).

Best
Shaikhul

On Sun, Jun 18, 2023 at 2:55 PM Ayaz Akram wrote:
> Hi Shaikhul,
>
> I think clflush is not supported in Ruby caches at the moment. For reference, here is the original patch that added support for the clflush instruction in gem5:
>
> arch-x86: Adding clflush, clflushopt, clwb instructions (7401) · Gerrit Code Review (googlesource.com)
> <https://gem5-review.googlesource.com/c/public/gem5/+/7401>
>
> -Ayaz
>
> On Fri, Jun 16, 2023 at 2:49 PM Khan Shaikhul Hadi via gem5-users <gem5-users@gem5.org> wrote:
>> Hi,
>> When I include the "clflush" instruction in an out-of-order simulation with MESITwoLevelCacheHierarchy, it gives the following error:
>>
>> build/X86/mem/ruby/system/RubyPort.cc:433: fatal: Ruby functional read failed for address 0x1f2b80
>>
>> Commenting out the clflush operations seems to solve the problem. Any idea why this could happen? Is CLFLUSH properly implemented in the two-level MESI protocol?
>>
>> I'm not proficient in gem5 debugging, but what I've observed is that this fatal error is caused by some read request packet, surprisingly not by any flush request. Also, if I comment out the fatal error message, the system gives the following warning:
>>
>> build/X86/mem/ruby/system/RubyPort.cc:267: warn: Cache maintenance operations are not supported in Ruby.
>>
>> and completes execution with no crash or other error message. I don't know what "cache maintenance operations" means here, or whether it is related to the previous error.
>>
>> I'm pretty confused and can't figure out how to approach this. Does anyone have any suggestions? I want to understand how the clflush operation is simulated in gem5 so that I can modify it for my research needs. I have attached my configuration script and workload C++ file here.
[gem5-users] Persistent memory in gem5: how to test a persistent memory workload properly
Hi,
I want to simulate a persistent memory machine in gem5. gem5 has an NVM module, but at the instruction level it mostly does not simulate CLFLUSH (especially for the MESI cache coherence protocol). I am also not sure whether it simulates the memory fence properly (for the out-of-order CPU, it seems MFenceOp's execute just returns NoFault without doing anything; I was expecting it to do something to ensure the store buffer is drained before other instructions can proceed). In that case, how does one run a persistent memory benchmark in gem5?

Side note: to make an update persistent on the x86 architecture, the update must be followed by a FLUSH and a FENCE.

Best
Shaikhul
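On the memory-model side, gem5's classic memory system does ship an NVM timing interface (NVMInterface, with an NVM_2400_1x64 timing preset). Whether it plugs into the stdlib's ChanneledMemory exactly as below depends on the gem5 version, so treat this as a sketch under that assumption rather than a verified recipe:

```python
# Sketch: a single-channel NVM-style memory for a stdlib board. The
# ChanneledMemory(interface, channels, interleaving, size=...) signature and
# the acceptance of an NVM interface class are assumptions to verify against
# your gem5 version.
from gem5.components.memory.memory import ChanneledMemory
from m5.objects import NVM_2400_1x64

def single_channel_nvm(size="1GiB"):
    # One channel of the NVM_2400_1x64 timing model, 64-byte interleaving.
    return ChanneledMemory(NVM_2400_1x64, 1, 64, size=size)
```

Note this only changes memory timing; it does not by itself make CLFLUSH or MFENCE behave correctly, which is the gap the message above is about.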
[gem5-users] Fatal error when clflush is included in the workload for an O3 system simulation
Hi,
When I include the "clflush" instruction in an out-of-order simulation with MESITwoLevelCacheHierarchy, it gives the following error:

build/X86/mem/ruby/system/RubyPort.cc:433: fatal: Ruby functional read failed for address 0x1f2b80

Commenting out the clflush operations seems to solve the problem. Any idea why this could happen? Is CLFLUSH properly implemented in the two-level MESI protocol?

I'm not proficient in gem5 debugging, but what I've observed is that this fatal error is caused by some read request packet, surprisingly not by any flush request. Also, if I comment out the fatal error message, the system gives the following warning:

build/X86/mem/ruby/system/RubyPort.cc:267: warn: Cache maintenance operations are not supported in Ruby.

and completes execution with no crash or other error message. I don't know what "cache maintenance operations" means here, or whether it is related to the previous error.

I'm pretty confused and can't figure out how to approach this. Does anyone have any suggestions? I want to understand how the clflush operation is simulated in gem5 so that I can modify it for my research needs. I have attached my configuration script and workload C++ file here.

Workload (the include targets and the rest of the thread loop were lost in the archive):

#include <stdio.h>      /* reconstructed: headers the code below needs */
#include <stdint.h>
#include <immintrin.h>  /* for _mm_clflush */

#define PRINT 1
#define NUMBER_OF_THREAD 1
#define ArrSize 6
#if PRINT
#define PrintValue(arg,...) {printf(" sample code " arg "\n",__VA_ARGS__);}
#else
#define PrintValue(arg,...) {}
#endif
#define FLUSH(arg) {_mm_clflush(arg);}
#define FENCE __asm__("mfence")

uint16_t* ARR = NULL;
uint64_t COUNTER;

inline void persis_barrier(){ __asm__("mfence"); }

void *threadFunction(void *argp){
    uint16_t thread_ID = *(uint16_t*)(argp);
    PrintValue("Inside Thread %d", thread_ID);
    for (int index = thread_ID; index < ...
    [remainder of the C file lost]

Configuration script:

import os, argparse, sys
from gem5.utils.requires import requires
from gem5.components.boards.simple_board import SimpleBoard
# from gem5.components.boards.x86_board import X86Board
from gem5.components.memory.single_channel import SingleChannelDDR3_1600
from gem5.components.processors.cpu_types import CPUTypes
from gem5.components.processors.simple_processor import SimpleProcessor
# from gem5.components.processors.simple_switchable_processor import SimpleSwitchableProcessor
from gem5.resources.resource import (
    CustomResource,
    Resource,
    CustomDiskImageResource,
)
from gem5.resources.workload import Workload
from gem5.simulate.simulator import Simulator
from gem5.simulate.exit_event import ExitEvent
from gem5.isas import ISA
from gem5.coherence_protocol import CoherenceProtocol
from gem5.components.cachehierarchies.ruby.mesi_two_level_cache_hierarchy import MESITwoLevelCacheHierarchy

## Parameters ##

# Memory
L1Cache = "32KiB"   # same size data and instruction cache
Asso_L1 = 4         # associativity
Asso_L2 = 8
L2Cache = "256KiB"  # 8 times of L1
L3Cache = "4096KiB" # 16 times of L2
DRAM_size = "1GiB"
## config for interconnect network
number_of_cpu = 4
number_of_dir = 4
network = "garnet"
topology = "Mesh_XY"
mesh_row = 4
ClkFreq = "3GHz"
isa = ISA.X86

requires(
    isa_required=ISA.X86,
    coherence_protocol_required=CoherenceProtocol.MESI_TWO_LEVEL,
    kvm_required=True,
)

cache_hierarchy = MESITwoLevelCacheHierarchy(
    l1i_size=L1Cache,
    l1i_assoc=Asso_L1,
    l1d_size=L1Cache,
    l1d_assoc=Asso_L1,
    l2_size=L2Cache,
    l2_assoc=Asso_L2,
    num_l2_banks=1)

memory = SingleChannelDDR3_1600(DRAM_size)
processor = SimpleProcessor(cpu_type=CPUTypes.O3, num_cores=number_of_cpu, isa=isa)
board = SimpleBoard(clk_freq=ClkFreq, processor=processor, memory=memory, cache_hierarchy=cache_hierarchy)

# simulation setup
# binary = CustomResource('tests/test-progs/hello/bin/x86/linux/hello')
binary = CustomResource('/compiled/workload/directory/fp')
board.set_se_binary_workload(binary)

# simulation start
simulator = Simulator(board=board)
# simulator = Simulator(board=board,
#     on_exit_event={
#         ExitEvent.EXIT : (func() for func in [processor.switch]),
#     },)
simulator.run()

(Attachment: Makefile, binary data)
[gem5-users] How clflush execution works in gem5 ?
Hi,
I'm trying to figure out how the "clflush" instruction works in gem5: specifically, how it issues a signal to the cache controller to evict the block from the cache hierarchy throughout the system, and how it receives confirmation to clean the store buffer so that the next fence lets the following instructions proceed. Does anyone know how this works, or where I should look for a better understanding?

I have tried to trace the clflush execution and found some confusing facts. It would be great if anyone could clarify these.

1. "clflush" execution eventually calls Clflushopt::initiateAcc() (build/X86/arch/x86/generated/exec-ns.cc.inc), as the macroop definition of CLFLUSH uses clflushopt. So there is no dedicated clflush operation in gem5, and all flush operations are treated as clflushopt?

2. When Clflushopt::initiateAcc() executes in a timing simulation (CPUTypes.TIMING), it eventually calls TimingSimpleCPU::writeMem() in src/cpu/simple/timing.cc. There you have:

if (data == NULL) {
    assert(flags & Request::STORE_NO_DATA);
    // This must be a cache block cleaning request
    memset(newData, 0, size);
} else {
    memcpy(newData, data, size);
}

So I was assuming it would have data == NULL and execute memset(), but it actually executes memcpy(). This seems weird. Am I missing something?

3. For an out-of-order simulation (CPUTypes.O3), Clflushopt::initiateAcc() is called twice per clflush instruction in my workload. For example, if my workload has 6 clflush instructions, a gdb breakpoint at Clflushopt::initiateAcc shows that the function is called 12 times (the timing simulation calls it 6 times, as it should). Can anyone explain what happens here?

Best
Shaikhul