[gem5-users] Re: how to run program on gem5 across different number of cycles
Hello,

When you call the simulate() function in Python, it takes a single parameter: the number of *ticks* to execute. A tick is usually 1 ps, so the value you pass to simulate() depends on your cycle time. There's a useful function for converting, though. Say you have a clock domain like the following:

```
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
```

You can get the number of ticks per cycle by calling the `getValue` function:

```
m5.instantiate()
print("Ticks per cycle: ", system.clk_domain.clock.getValue())
```

So, if you want to run for 1000 cycles, you can call simulate() like below:

```
simulate(1000 * system.clk_domain.clock.getValue())
```

See http://www.gem5.org/documentation/learning_gem5/part1/simple_config/ for more details on how to create your own config files.

Cheers,
Jason

On Mon, Jun 22, 2020 at 8:05 PM ABD ALRHMAN ABO ALKHEEL via gem5-users <gem5-users@gem5.org> wrote:
> Hi All,
>
> I want to run a program in gem5 SE mode across different numbers of
> cycles, for example, 1K cycles, 10K, 100K, etc.
>
> How can I do that?
>
> Thanks

___
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org
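Outside of gem5, the cycles-to-ticks arithmetic Jason describes can be sketched in plain Python. This assumes gem5's default resolution of 1 tick = 1 ps; the helper name `freq_to_ticks_per_cycle` is illustrative, not part of the gem5 API (inside gem5 you would use `clock.getValue()` as shown above):

```python
# Sketch of the cycles -> ticks conversion, assuming 1 tick = 1 picosecond
# (gem5's usual default). Illustrative helper only, not a gem5 API.

UNITS = {'GHz': 1e9, 'MHz': 1e6, 'kHz': 1e3, 'Hz': 1.0}

def freq_to_ticks_per_cycle(freq):
    """Convert a clock string like '1GHz' to ticks per cycle."""
    for suffix, scale in UNITS.items():
        if freq.endswith(suffix):
            hz = float(freq[:-len(suffix)]) * scale
            return round(1e12 / hz)   # clock period in picoseconds
    raise ValueError(f"unrecognized frequency: {freq}")

# A 1 GHz clock has a 1 ns period, i.e. 1000 ticks per cycle, so the
# 1K/10K/100K cycle counts from the question become these tick counts:
for cycles in (1_000, 10_000, 100_000):
    print(cycles, "cycles ->", cycles * freq_to_ticks_per_cycle('1GHz'), "ticks")
```

With this arithmetic, `simulate(1000 * ticks_per_cycle)` runs 1000 cycles regardless of the configured clock.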
[gem5-users] how to run program on gem5 across different number of cycles
Hi All,

I want to run a program in gem5 SE mode across different numbers of cycles, for example, 1K cycles, 10K, 100K, etc.

How can I do that?

Thanks
[gem5-users] Re: GCN3 GPU Simulation Start-Up Time
Sounds good. I'll generate a patch for the staging branch in the next few days.

Dan

On Mon, Jun 22, 2020 at 2:50 PM Matt Sinclair wrote:
> In my opinion, adding support for HSA_PACKET_TYPE_AGENT_DISPATCH,
> irrespective of the current issues, is worthwhile and helpful to push.
>
> If you have a minimum working example of how you change the benchmark,
> that would be helpful too.
>
> Kyle R. has spent a bunch of time trying to identify the source of the
> problem within the synchronize call, but thus far we haven't found anything
> concrete. So for now, having this workaround would definitely be helpful
> for the community.
>
> Matt
[gem5-users] 2 level TLB in ARM Full System with Ruby
Hello All,

I was wondering if there is a way to simulate a system with two levels of TLBs in full-system simulation with Ruby for ARM. I have seen other examples that use the classic memory model and use a cache as the second-level TLB. Is there something similar that can be done in the Ruby memory system? Can I use a standalone RubyCache as the second-level TLB?

Thank you very much in advance.

Best Regards,
Shehab
[gem5-users] Running python based graph neural network application on gem5
Hello Everyone,

I would like to run a graph NN application on a CPU-based architecture. Does gem5 support applications written in PyTorch or TensorFlow (Python)? Also, if there is another benchmark suite that has graph NN applications and can run on gem5, please do tell me about it.

PS: I had asked this question for the GPU on gem5-gpu, but currently I want to check the CPU.
[gem5-users] Re: GCN3 GPU Simulation Start-Up Time
In my opinion, adding support for HSA_PACKET_TYPE_AGENT_DISPATCH, irrespective of the current issues, is worthwhile and helpful to push.

If you have a minimum working example of how you change the benchmark, that would be helpful too.

Kyle R. has spent a bunch of time trying to identify the source of the problem within the synchronize call, but thus far we haven't found anything concrete. So for now, having this workaround would definitely be helpful for the community.

Matt

On Mon, Jun 22, 2020 at 1:25 PM Daniel Gerzhoy wrote:
> Hey Matt,
>
> Happy to do that if you think it's viable, but I have to say my workaround
> is pretty hack-y. There are definitely some benchmark changes on top of
> changes to the simulator.
>
> Let me describe it for you and then if you still think it's a good idea
> I'll make a patch.
>
> My workaround relies on the fact that:
> 1. Launching a kernel sets up a completion signal that
> hipDeviceSynchronize() ultimately waits on.
> 2. All you need in the benchmark is that completion signal to know that
> your kernel is complete.
>
> So I basically implement HSA_PACKET_TYPE_AGENT_DISPATCH in
> hsa_packet_processor.cc and gpu_command_processor.cc to receive commands
> from the benchmark directly. One of the commands is to steal the
> completion signal for a particular kernel and pass it back to the
> benchmark.
>
> After you launch the kernel (normally) you pass in that kernel's id (you
> have to keep track), then send a command to steal the completion signal.
> It gets passed back in the return_address member of the agent packet.
>
> In the benchmark I store that signal and use it to do a
> hsa_signal_wait_relaxed on it. And you have to do this every time you
> launch a kernel; you could conceivably overload the hipDeviceSynchronize()
> function to do this for you / keep track of kernel launches too.
>
> Let me know if you think this is still something you guys want.
>
> Cheers,
>
> Dan
[gem5-users] Re: GCN3 GPU Simulation Start-Up Time
Hey Matt,

Happy to do that if you think it's viable, but I have to say my workaround is pretty hack-y. There are definitely some benchmark changes on top of changes to the simulator.

Let me describe it for you and then if you still think it's a good idea I'll make a patch.

My workaround relies on the fact that:
1. Launching a kernel sets up a completion signal that hipDeviceSynchronize() ultimately waits on.
2. All you need in the benchmark is that completion signal to know that your kernel is complete.

So I basically implement HSA_PACKET_TYPE_AGENT_DISPATCH in hsa_packet_processor.cc and gpu_command_processor.cc to receive commands from the benchmark directly. One of the commands is to steal the completion signal for a particular kernel and pass it back to the benchmark.

After you launch the kernel (normally) you pass in that kernel's id (you have to keep track), then send a command to steal the completion signal. It gets passed back in the return_address member of the agent packet.

In the benchmark I store that signal and use it to do a hsa_signal_wait_relaxed on it. And you have to do this every time you launch a kernel; you could conceivably overload the hipDeviceSynchronize() function to do this for you / keep track of kernel launches too.

Let me know if you think this is still something you guys want.

Cheers,

Dan

On Mon, Jun 22, 2020 at 2:04 PM Matt Sinclair wrote:
> Hi Dan,
>
> Do you mind pushing your workaround with the completion signal as a patch
> to the staging branch so we can take a look? Or is this just a change to
> the program(s) itself?
>
> After Kyle's fix (which has been pushed as an update to my patch), we're
> still seeing some hipDeviceSynchronize failures. So we're interested in
> looking at what you did to see if it solves the problem.
>
> Matt
[gem5-users] Re: GCN3 GPU Simulation Start-Up Time
Hi Dan, Do you mind pushing your workaround with the completion signal as a patch to the staging branch so we can take a look? Or is this just a change to the program(s) itself? After Kyle's fix (which has been pushed as an update to my patch), we're still seeing some hipDeviceSynchronize failures. So we're interested in looking at what you did to see if it solves the problem. Matt On Fri, Jun 19, 2020 at 4:15 PM Kyle Roarty wrote: > Hi Dan, > > Another thing to try is to add and set the environment variable HIP_DB in > apu_se.py (Line 461 for me, starts with "env = ['LD_LIBRARY_PATH...") . > Setting HIP_DB=sync or HIP_DB=api has prevented crashing on > hipDeviceSynchronize() calls for the applications I've tested. > > I had traced through this issue (or at least one that manifests the same > way) a while back, and what I remember is that the crash happens somewhere > in the HIP code, and it occurs because somewhere much earlier we go down a > codepath that doesn't clear a register (I believe that was also in HIP > code). That register then gets re-used until the error propagates to the > register used in the ld instruction. Unfortunately, I had a hard time of > getting consistent, manageable traces, so I wasn't able to figure out why > we were going down the wrong codepath. > > Kyle > -- > *From:* mattdsincl...@gmail.com > *Sent:* Friday, June 19, 2020 2:08 PM > *To:* Daniel Gerzhoy > *Cc:* GAURAV JAIN ; Kyle Roarty ; gem5 > users mailing list > *Subject:* Re: [gem5-users] GCN3 GPU Simulation Start-Up Time > > Thanks Dan. Kyle R. has found some things about the patch that we're > testing and may need to be pushed pending those results. Fingers crossed > that fix will help you too. > > As Gaurav mentioned previously, the spin flag did not always solve the > problem for us -- seems like that is true for you too, although I don't > remember square ever failing for us. > > I don't know exactly where that PC is coming from, I'd have to get a > trace. 
> But I suspect it's actually a GPU address being accessed by some
> instruction that's failing -- in the past when I've seen this kind of
> issue, it was happening because the kernel boundary was not being
> respected and code was running that shouldn't have been running yet. I
> don't know what your use case is, so it's possible that is not the issue
> for you -- a trace would be the only way to know for sure.
>
> Matt
>
> Regards,
> Matt Sinclair
> Assistant Professor
> University of Wisconsin-Madison
> Computer Sciences Department
> cs.wisc.edu/~sinclair
>
> On Wed, Jun 17, 2020 at 10:30 AM Daniel Gerzhoy
> wrote:
>
> Hey Matt,
>
> Thanks for pushing those changes. I updated the head of the amd staging
> branch and tried to run square. The time to get into main stays about the
> same (5 min) FYI.
>
> But the hipDeviceSynchronize() fails even when I add
> hipSetDeviceFlags(hipDeviceScheduleSpin); unfortunately.
>
> panic: Tried to read unmapped address 0x1853e78.
> PC: 0x752a966b, Instr: MOV_R_M : ld rdi, DS:[rbx + 0x8]
>
> Is that PC (0x752a966b) somewhere in the HIP code, or in the emulated
> driver? The line between the simulator code and guest code is kind of
> blurry to me around there haha.
>
> Best,
>
> Dan
>
> On Mon, Jun 15, 2020 at 9:59 PM Matt Sinclair
> wrote:
>
> Hi Dan,
>
> Thanks for the update. Apologies for the delay, the patch didn't apply
> cleanly initially, but I have pushed the patch I promised previously.
> Since I'm not sure if you're on the develop branch or the AMD staging > branch, I pushed it to both (there are some differences in code on the > branches, which I hope will be resolved over time as more of the commits > from the staging branch are pushed to develop: > > - develop: https://gem5-review.googlesource.com/c/public/gem5/+/30354 > - AMD staging: https://gem5-review.googlesource.com/c/amd/gem5/+/30335 > > I have validated that both of them compile, and asked Kyle R to test that > both of them a) don't break anything that is expected to work publicly with > the GPU and b) hopefully resolve some of the problems (like yours) with > barrier synchronization. Let us know if this solves your problem too -- > fingers crossed. > > Thanks, > Matt > > On Fri, Jun 12, 2020 at 2:47 PM Daniel Gerzhoy > wrote: > > Matt, > > It wasn't so much a solution as an explanation. Kyle was running on an r5 > 3600 (3.6-4.2 GHz) whereas I am on a Xeon Gold 5117 @ (2.0 - 2.8 GHz) > > The relative difference in clock speed seems to me to be a more reasonable > explanation for a slowdown from 1-1.5 minutes to ~5min (actual time before > min) than the 8 min (time before main + exit time) I was seeing before. > > I'll update to the latest branch and see if that speeds me up further. I'm > also going to try running on a faster machine as well though that will take > some setup-time. > > Gaurav, > > Thanks for the tip, that will be helpful in the meantime. > > Dan > > On Fri,
[gem5-users] Flush L1 cache periodically
Hello all,

I am facing an assertion failure when I try to flush the dcache periodically, using a recurring scheduled timing event. On the handler of this event I try to flush the dcache or the icache. For the icache scenario I just call base.cc::memInvalidate(), see
https://gem5.googlesource.com/public/gem5/+/96fce476785a834f102ae69a895e661cf08e47cd/src/mem/cache/base.cc#1585,
while for the dcache case I first call the writeback (see:
https://gem5.googlesource.com/public/gem5/+/96fce476785a834f102ae69a895e661cf08e47cd/src/mem/cache/base.cc#1579)
and then the cache invalidation. I have observed that for the dcache scenario the following assertion gets triggered, see
https://gem5.googlesource.com/public/gem5/+/96fce476785a834f102ae69a895e661cf08e47cd/src/mem/cache/base.cc#1322.
Am I doing something wrong during the dl1 cache flushing procedure?

Kind regards,
Michail
[gem5-users] Re: Multi-thread workload bug in new version
Hi Taiyu,

This seems like a different bug than https://gem5.atlassian.net/browse/GEM5-332. We should probably open a new ticket on Jira. I'm a bit surprised our threads tests didn't catch this. We'll look into it.

@Hoa Nguyen, can you create a ticket on Jira and see if you can reproduce this?

Cheers,
Jason

On Sun, Jun 21, 2020 at 5:58 AM Taiyu Zhou via gem5-users <gem5-users@gem5.org> wrote:
> Hi guys!
>
> My gem5 version is 96fce476785a834f102ae69a895e661cf08e47cd, which I
> cloned from GitHub. I am trying to run a multi-threaded program in SE
> mode, but I encountered a bug. My program is simple:
>
> void f1()
> {
>     char a[64];
>     printf("hello\n");
> }
>
> int main()
> {
>     std::thread threads[2];
>     for (int i = 0; i < 2; i++)
>         threads[i] = std::thread(f1);
>     for (auto& t : threads) {
>         t.join();
>     }
> }
>
> My run cmd is:
> ./build/X86/gem5.opt configs/example/se.py -c
> /home/ubuntu/taiyu/test_app/thread_t --mem-size=8GB --cpu-type=DerivO3CPU
> --caches --l2cache -n 3
>
> And gem5 reports:
> warn: ignoring syscall set_robust_list(...)
> warn: ignoring syscall mprotect(...)
> hello
> hello
> panic: panic condition !clobber occurred: EmulationPageTable::allocate:
> addr 0x77471000 already mapped
> Memory Usage: 8642200 KBytes
> Program aborted at tick 1411052000
> --- BEGIN LIBC BACKTRACE ---
> ./build/X86/gem5.opt(_Z15print_backtracev+0x28)[0xa977b8]
> ./build/X86/gem5.opt(_Z12abortHandleri+0x46)[0xaa8ab6]
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f46d3bad390]
> /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38)[0x7f46d2552428]
> /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7f46d255402a]
> ./build/X86/gem5.opt[0x807e0f]
> ./build/X86/gem5.opt(_ZN18EmulationPageTable3mapEmmlm+0x93c)[0xa230cc]
> ./build/X86/gem5.opt(_ZN8MemState10fixupFaultEm+0xd8)[0xaf1518]
> ./build/X86/gem5.opt(_ZN6X86ISA9PageFault6invokeEP13ThreadContextRK14RefCountingPtrI10StaticInstE+0xf8)[0x11d4a58]
> ./build/X86/gem5.opt(_ZN13DefaultCommitI9O3CPUImplE10commitHeadERK14RefCountingPtrI13BaseO3DynInstIS0_EEj+0x9a7)[0xc87ba7]
> ./build/X86/gem5.opt(_ZN13DefaultCommitI9O3CPUImplE11commitInstsEv+0x637)[0xc88907]
> ./build/X86/gem5.opt(_ZN13DefaultCommitI9O3CPUImplE6commitEv+0xc70)[0xc8b070]
> ./build/X86/gem5.opt(_ZN13DefaultCommitI9O3CPUImplE4tickEv+0xc8)[0xc8bd88]
> ./build/X86/gem5.opt(_ZN9FullO3CPUI9O3CPUImplE4tickEv+0x155)[0xc9d935]
> ./build/X86/gem5.opt(_ZN10EventQueue10serviceOneEv+0x9d)[0xa9f84d]
> ./build/X86/gem5.opt(_Z9doSimLoopP10EventQueue+0x7b)[0xac1a6b]
> ./build/X86/gem5.opt(_Z8simulatem+0xc2a)[0xac2a0a]
> ./build/X86/gem5.opt[0x17f8671]
> ./build/X86/gem5.opt[0xb14faa]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x7852)[0x7f46d3e6a7b2]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x85c)[0x7f46d3fa111c]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x6ffd)[0x7f46d3e69f5d]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x85c)[0x7f46d3fa111c]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x6ffd)[0x7f46d3e69f5d]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x85c)[0x7f46d3fa111c]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x6ffd)[0x7f46d3e69f5d]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x85c)[0x7f46d3fa111c]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCode+0x19)[0x7f46d3e62de9]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x613b)[0x7f46d3e6909b]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x85c)[0x7f46d3fa111c]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x6ffd)[0x7f46d3e69f5d]
> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x85c)[0x7f46d3fa111c]
> --- END LIBC BACKTRACE ---
>
> This problem does not occur in an older version that I cloned from GitHub
> in March; it works in the older version.
[gem5-users] Running Trusted Firmware-A on gem5
Hi all,

We've just released an Arm community blog post regarding Trusted Firmware-A support in gem5. This support is present since gem5 v20.0.0.0; please see the post for details, and feel free to raise any questions.

https://community.arm.com/developer/research/b/articles/posts/running-trusted-firmware-a-on-gem5

Kind regards,
Adrian.