Yes, at a particular level all the registers are flattened into a single
index space, with the FP regs after the ints, so it's likely that what
you're seeing are FP reg accesses (assuming you're not using the O3 model
and seeing physical register indices).
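
To make the mapping concrete, here is a minimal sketch (illustrative only, not gem5's actual code) of a flattened register index space with the 32 integer registers first; under that layout a trace entry like r36 actually denotes FP register f4.

```python
# Illustrative sketch of a flattened (unified) register index space.
# Assumption: the 32 architectural integer registers come first,
# followed by the FP registers, mirroring the layout described above;
# this is not gem5's actual implementation.
NUM_INT_REGS = 32  # Alpha has r0-r31

def flatten(reg_class, idx):
    """Map an architectural (class, index) pair to a unified index."""
    if reg_class == "int":
        return idx
    if reg_class == "fp":
        return NUM_INT_REGS + idx
    raise ValueError("unknown register class: %s" % reg_class)

def unflatten(unified_idx):
    """Recover the architectural class and index from a unified index."""
    if unified_idx < NUM_INT_REGS:
        return ("int", unified_idx)
    return ("fp", unified_idx - NUM_INT_REGS)

# A trace entry printed as "r36" is really FP register f4:
print(unflatten(36))  # ('fp', 4)
```

So the r36, r37, and r39 entries in the trace would correspond to f4, f5, and f7 under this layout.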

Steve

On Tue, Jul 24, 2012 at 8:17 AM, Yanqi Zhou <[email protected]> wrote:

> Dear all,
> I added a flag to generate run-time trace files in Alpha full-system
> simulation. As far as I know, there should be only 32 architectural integer
> registers in the Alpha ISA. But in the trace file, there are instructions
> using r39, r36, r37, and so on. Has anyone noticed the same thing? Is it
> possible that they are floating-point registers and the trace file does not
> print them out correctly?
>
> Best,
> Yanqi
> ________________________________________
> From: [email protected] [[email protected]] on behalf of
> [email protected] [[email protected]]
> Sent: Tuesday, July 24, 2012 11:10 AM
> To: [email protected]
> Subject: gem5-dev Digest, Vol 63, Issue 63
>
> Send gem5-dev mailing list submissions to
>         [email protected]
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://m5sim.org/mailman/listinfo/gem5-dev
> or, via email, send a message with subject or body 'help' to
>         [email protected]
>
> You can reach the person managing the list at
>         [email protected]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gem5-dev digest..."
>
>
> Today's Topics:
>
>    1. Re: Review Request: Ruby: Clean up topology changes (Nilay Vaish)
>    2. Re: Review Request: syscall emulation: Clean up ioctl
>       handling, and implement for x86. (Marc Orr)
>    3. Re: Review Request: configs: add option for repeatedly
>       switching back-and-forth between cpu types. (Anthony Gutierrez)
>    4. Re: Review Request: configs: add option for repeatedly
>       switching back-and-forth between cpu types. (Anthony Gutierrez)
>    5. Cron <m5test@zizzer> /z/m5/regression/do-regression quick
>       (Cron Daemon)
>    6. Re: Review Request: Config: change how cpu class is set
>       (Nilay Vaish)
>    7. Re: Inorder switch on cache hit (Korey Sewell)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 23 Jul 2012 17:31:00 -0000
> From: "Nilay Vaish" <[email protected]>
> To: "Nilay Vaish" <[email protected]>, "Jason Power"
>         <[email protected]>,   "Default" <[email protected]>
> Subject: Re: [gem5-dev] Review Request: Ruby: Clean up topology
>         changes
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1308/#review3132
> -----------------------------------------------------------
>
> Ship it!
>
>
> Looks fine to me.
>
> - Nilay Vaish
>
>
> On July 23, 2012, 7:45 a.m., Jason Power wrote:
> >
> > -----------------------------------------------------------
> > This is an automatically generated e-mail. To reply, visit:
> > http://reviews.gem5.org/r/1308/
> > -----------------------------------------------------------
> >
> > (Updated July 23, 2012, 7:45 a.m.)
> >
> >
> > Review request for Default.
> >
> >
> > Description
> > -------
> >
> > Changeset 9123:7166285d6ad2
> > ---------------------------
> > Ruby: Clean up topology changes
> > Moved instantiateTopology into Ruby.py and removed the topologies
> directory
> > I'm not completely sure why the topology creation was in the src/
> directory,
> > but I think it was because previously the python needed to be compiled.
> >
> > Also added some extra inheritance to the topologies to clean up some
> issues
> > in the legacy topologies
> >
> >
> > Diffs
> > -----
> >
> >   configs/ruby/Ruby.py 48eeef8a0997
> >   configs/topologies/BaseTopology.py 48eeef8a0997
> >   configs/topologies/Cluster.py 48eeef8a0997
> >   configs/topologies/Crossbar.py 48eeef8a0997
> >   configs/topologies/Mesh.py 48eeef8a0997
> >   configs/topologies/MeshDirCorners.py 48eeef8a0997
> >   configs/topologies/Pt2Pt.py 48eeef8a0997
> >   configs/topologies/Torus.py 48eeef8a0997
> >   src/mem/ruby/network/topologies/SConscript 48eeef8a0997
> >   src/mem/ruby/network/topologies/TopologyCreator.py 48eeef8a0997
> >
> > Diff: http://reviews.gem5.org/r/1308/diff/
> >
> >
> > Testing
> > -------
> >
> >
> > Thanks,
> >
> > Jason Power
> >
> >
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 23 Jul 2012 18:12:39 -0000
> From: "Marc Orr" <[email protected]>
> To: "Marc Orr" <[email protected]>, "Default" <[email protected]>,
>         "Steve Reinhardt" <[email protected]>
> Subject: Re: [gem5-dev] Review Request: syscall emulation: Clean up
>         ioctl handling, and implement for x86.
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1318/#review3133
> -----------------------------------------------------------
>
>
>
> src/sim/syscall_emul.hh
> <http://reviews.gem5.org/r/1318/#comment3289>
>
>     Why is this fatal being changed to a warn? I'm not necessarily against
> this, but I thought much of the effort and controversy generated by this
> patch stemmed from the position that unsupported ioctls should cause a
> fatal.
>
>
> - Marc Orr
>
>
> On July 22, 2012, 5:29 p.m., Steve Reinhardt wrote:
> >
> > -----------------------------------------------------------
> > This is an automatically generated e-mail. To reply, visit:
> > http://reviews.gem5.org/r/1318/
> > -----------------------------------------------------------
> >
> > (Updated July 22, 2012, 5:29 p.m.)
> >
> >
> > Review request for Default.
> >
> >
> > Description
> > -------
> >
> > [Note: this is an updated version of Marc's patch #1187.  I realized I
> hadn't pushed that, but when I went to test it, it didn't compile for ARM.
>  I ended up doing some more restructuring in the process of fixing that
> problem.]
> >
> > syscall emulation: Clean up ioctl handling, and implement for x86.
> >
> > Enable different whitelists for different OS/arch combinations,
> > since some use the generic Linux definitions only, and others
> > use definitions inherited from earlier Unix flavors on those
> > architectures.
> >
> > Also update x86 function pointers so ioctl is no longer
> > unimplemented on that platform.
> >
> > This patch is a revised version of Vince Weaver's earlier patch.
> >
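
As a rough illustration of the whitelist idea described above (the command numbers and table layout here are invented for the example, not the actual gem5 tables):

```python
# Hypothetical per-OS/arch ioctl whitelists. Some platforms use the
# generic Linux values; others inherit values from earlier Unix
# flavors, so each (OS, arch) combination gets its own set. The
# command numbers below are made up for illustration.
IOCTL_WHITELIST = {
    ("linux", "x86"):   {0x5401, 0x5413},  # example command numbers
    ("linux", "alpha"): {0x40267408},      # example inherited value
}

def ioctl_allowed(os_name, arch, cmd):
    """Return True if this ioctl command is whitelisted for the platform."""
    return cmd in IOCTL_WHITELIST.get((os_name, arch), set())
```

An unlisted platform simply gets an empty whitelist, so every ioctl on it is rejected.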
> >
> > Diffs
> > -----
> >
> >   src/arch/alpha/linux/linux.hh UNKNOWN
> >   src/arch/alpha/tru64/tru64.hh UNKNOWN
> >   src/arch/arm/linux/linux.hh UNKNOWN
> >   src/arch/mips/linux/linux.hh UNKNOWN
> >   src/arch/power/linux/linux.hh UNKNOWN
> >   src/arch/sparc/linux/linux.hh UNKNOWN
> >   src/arch/x86/linux/syscalls.cc UNKNOWN
> >   src/kern/linux/linux.hh UNKNOWN
> >   src/sim/syscall_emul.hh UNKNOWN
> >
> > Diff: http://reviews.gem5.org/r/1318/diff/
> >
> >
> > Testing
> > -------
> >
> > passes 'util/regress quick'
> >
> >
> > Thanks,
> >
> > Steve Reinhardt
> >
> >
>
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 24 Jul 2012 00:39:31 -0000
> From: "Anthony Gutierrez" <[email protected]>
> To: "Steve Reinhardt" <[email protected]>,       "Anthony Gutierrez"
>         <[email protected]>, "Default" <[email protected]>
> Subject: Re: [gem5-dev] Review Request: configs: add option for
>         repeatedly switching back-and-forth between cpu types.
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1311/
> -----------------------------------------------------------
>
> (Updated July 23, 2012, 5:39 p.m.)
>
>
> Review request for Default.
>
>
> Description (updated)
> -------
>
> Changeset 9130:75db2cd47361
> ---------------------------
> configs: add option for repeatedly switching back-and-forth between cpu
> types.
>
> This patch adds a --repeat-switch option that enables repeated core
> switching at a user-defined period (set with the --switch-freq option).
> Currently, a switch can only occur between like CPU types; inorder CPU
> switching is not supported.
>
> *note*
> This patch simply allows a config that performs repeated switching; it
> does not fix drain/switchout functionality. If you run with repeated
> switching, you will hit assertion failures and/or your workload will
> hang or die.
>
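
The new options could be sketched roughly like this (using plain argparse rather than gem5's actual Options.py helpers; the default period below is invented for illustration):

```python
import argparse

# Sketch of the options described in the patch; gem5's real Options.py
# uses its own option helpers, so this argparse version is only an
# approximation, and the default period is invented.
parser = argparse.ArgumentParser()
parser.add_argument("--repeat-switch", action="store_true",
                    help="repeatedly switch back and forth between CPU types")
parser.add_argument("--switch-freq", type=int, default=1000000,
                    help="period (in ticks) between CPU switches")

args = parser.parse_args(["--repeat-switch", "--switch-freq", "500000"])
print(args.repeat_switch, args.switch_freq)  # True 500000
```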
>
> Diffs (updated)
> -----
>
>   configs/common/Options.py b57966a6c51249cf17fb9633a00d055fc260d72b
>   configs/common/Simulation.py b57966a6c51249cf17fb9633a00d055fc260d72b
>
> Diff: http://reviews.gem5.org/r/1311/diff/
>
>
> Testing
> -------
>
> The options added to this config play nicely with the other options.
> Ran with repeat switching on with various other options. Simulation
> started and ran successfully while switching until an assertion failed
> or the workload kernel panicked or hung due to drain/switchout bugs.
>
>
> Thanks,
>
> Anthony Gutierrez
>
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 24 Jul 2012 00:40:43 -0000
> From: "Anthony Gutierrez" <[email protected]>
> To: "Steve Reinhardt" <[email protected]>,       "Anthony Gutierrez"
>         <[email protected]>, "Default" <[email protected]>
> Subject: Re: [gem5-dev] Review Request: configs: add option for
>         repeatedly switching back-and-forth between cpu types.
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
>
>
> > On July 22, 2012, 7:19 p.m., Steve Reinhardt wrote:
> > > configs/common/Simulation.py, line 167
> > > <http://reviews.gem5.org/r/1311/diff/2/?file=27986#file27986line167>
> > >
> > >     It looks like some of this added code duplicates code above... can
> we get away with reusing the switch_cpus list instead of creating a new
> repeat_switch_cpus list?
>
> I did this because I wanted to make it easy for people to use this
> functionality with --fast-forward; however, I hadn't added --fast-forward
> support to --repeat-switch. It's there now.
>
>
> - Anthony
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1311/#review3128
> -----------------------------------------------------------
>
>
> On July 23, 2012, 5:39 p.m., Anthony Gutierrez wrote:
> >
> > -----------------------------------------------------------
> > This is an automatically generated e-mail. To reply, visit:
> > http://reviews.gem5.org/r/1311/
> > -----------------------------------------------------------
> >
> > (Updated July 23, 2012, 5:39 p.m.)
> >
> >
> > Review request for Default.
> >
> >
> > Description
> > -------
> >
> > Changeset 9130:75db2cd47361
> > ---------------------------
> > configs: add option for repeatedly switching back-and-forth between cpu
> types.
> >
> > This patch adds a --repeat-switch option that will enable repeat core
> > switching at a user defined period (set with --switch-freq option).
> > currently, a switch can only occur between like CPU types. inorder CPU
> > switching is not supported.
> >
> > *note*
> > this patch simply allows a config that will perform repeat switching, it
> > does not fix drain/switchout functionality. if you run with repeat
> switching
> > you will hit assertion failures and/or your workload with hang or die.
> >
> >
> > Diffs
> > -----
> >
> >   configs/common/Options.py b57966a6c51249cf17fb9633a00d055fc260d72b
> >   configs/common/Simulation.py b57966a6c51249cf17fb9633a00d055fc260d72b
> >
> > Diff: http://reviews.gem5.org/r/1311/diff/
> >
> >
> > Testing
> > -------
> >
> > The options added to this config play nicely with the other options.
> > Ran with repeat switching on with various other options. Simulation
> > started and ran successfully while switching until an assertion failed
> > or the workload kernel panicked or hung due to drain/switchout bugs.
> >
> >
> > Thanks,
> >
> > Anthony Gutierrez
> >
> >
>
>
>
> ------------------------------
>
> Message: 5
> Date: Tue, 24 Jul 2012 03:40:10 -0400
> From: [email protected] (Cron Daemon)
> To: [email protected]
> Subject: [gem5-dev] Cron <m5test@zizzer>
>         /z/m5/regression/do-regression quick
> Message-ID: <E1StZj0-0004Jd-51@zizzer>
> Content-Type: text/plain; charset=ANSI_X3.4-1968
>
> ***** build/ALPHA/tests/opt/quick/se/20.eio-short/alpha/eio/simple-atomic
> FAILED!
> ***** build/ALPHA/tests/opt/quick/se/30.eio-mp/alpha/eio/simple-atomic-mp
> FAILED!
> ***** build/ALPHA/tests/opt/quick/se/20.eio-short/alpha/eio/simple-timing
> FAILED!
> ***** build/ALPHA/tests/opt/quick/se/30.eio-mp/alpha/eio/simple-timing-mp
> FAILED!
> scons: `build/ALPHA_MOESI_hammer/tests/opt/quick/fs' is up to date.
> scons: `build/ALPHA_MESI_CMP_directory/tests/opt/quick/fs' is up to date.
> scons: `build/ALPHA_MOESI_CMP_directory/tests/opt/quick/fs' is up to date.
> scons: `build/ALPHA_MOESI_CMP_token/tests/opt/quick/fs' is up to date.
> scons: `build/MIPS/tests/opt/quick/fs' is up to date.
> scons: *** Error 1
> scons: *** Error 1
> scons: *** Error 1
> scons: *** Error 1
> scons: `build/POWER/tests/opt/quick/fs' is up to date.
> scons: `build/X86_MESI_CMP_directory/tests/opt/quick/se' is up to date.
> scons: `build/X86_MESI_CMP_directory/tests/opt/quick/fs' is up to date.
> *****
> build/ALPHA_MOESI_hammer/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MOESI_hammer
> passed.
> *****
> build/ALPHA_MOESI_hammer/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MOESI_hammer
> passed.
> *****
> build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_token
> passed.
> *****
> build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_token
> passed.
> *****
> build/ALPHA_MOESI_hammer/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MOESI_hammer
> passed.
> *****
> build/ALPHA_MESI_CMP_directory/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MESI_CMP_directory
> passed.
> ***** build/ALPHA/tests/opt/quick/se/01.hello-2T-smt/alpha/linux/o3-timing
> passed.
> *****
> build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_directory
> passed.
> *****
> build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_directory
> passed.
> *****
> build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_directory
> passed.
> ***** build/MIPS/tests/opt/quick/se/00.hello/mips/linux/simple-timing
> passed.
> ***** build/MIPS/tests/opt/quick/se/00.hello/mips/linux/simple-timing-ruby
> passed.
> ***** build/MIPS/tests/opt/quick/se/00.hello/mips/linux/o3-timing passed.
> ***** build/MIPS/tests/opt/quick/se/00.hello/mips/linux/inorder-timing
> passed.
> ***** build/MIPS/tests/opt/quick/se/00.hello/mips/linux/simple-atomic
> passed.
> ***** build/ALPHA/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby
> passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing
> passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/inorder-timing
> passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/simple-atomic
> passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/simple-atomic
> passed.
> *****
> build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-atomic
> passed.
> *****
> build/ALPHA_MESI_CMP_directory/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MESI_CMP_directory
> passed.
> *****
> build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby
> passed.
> *****
> build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-atomic-dual
> passed.
> *****
> build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby
> passed.
> *****
> build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-timing
> passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/o3-timing passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/o3-timing passed.
> *****
> build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-timing-dual
> passed.
> ***** build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/simple-timing
> passed.
> *****
> build/ALPHA_MESI_CMP_directory/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MESI_CMP_directory
> passed.
> *****
> build/ALPHA_MESI_CMP_directory/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MESI_CMP_directory
> passed.
> *****
> build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_token
> passed.
> *****
> build/ALPHA_MOESI_hammer/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MOESI_hammer
> passed.
> ***** build/POWER/tests/opt/quick/se/00.hello/power/linux/o3-timing passed.
> ***** build/POWER/tests/opt/quick/se/00.hello/power/linux/simple-atomic
> passed.
> ***** build/ALPHA/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby
> passed.
> *****
> build/ALPHA/tests/opt/quick/fs/80.netperf-stream/alpha/linux/twosys-tsunami-simple-atomic
> passed.
> ***** build/ALPHA/tests/opt/quick/se/50.memtest/alpha/linux/memtest passed.
> *****
> build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_token
> passed.
> *****
> build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_directory
> passed.
> ***** build/SPARC/tests/opt/quick/se/02.insttest/sparc/linux/o3-timing
> passed.
> ===== Statistics differences =====
> ***** build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-timing-ruby
> passed.
> ***** build/SPARC/tests/opt/quick/se/00.hello/sparc/linux/simple-timing
> passed.
> ***** build/SPARC/tests/opt/quick/se/00.hello/sparc/linux/simple-atomic
> passed.
> *****
> build/SPARC/tests/opt/quick/se/40.m5threads-test-atomic/sparc/linux/simple-atomic-mp
> passed.
> *****
> build/SPARC/tests/opt/quick/se/40.m5threads-test-atomic/sparc/linux/o3-timing-mp
> passed.
> ***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic passed.
> *****
> build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic-dummychecker
> passed.
> ***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing-checker
> passed.
> ***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-timing passed.
> ***** build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing passed.
> ***** build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-atomic passed.
> ***** build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-timing passed.
> ***** build/X86/tests/opt/quick/se/00.hello/x86/linux/o3-timing passed.
> *****
> build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
> passed.
> *****
> build/SPARC/tests/opt/quick/se/00.hello/sparc/linux/simple-timing-ruby
> passed.
> *****
> build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic-dual
> passed.
> ***** build/SPARC/tests/opt/quick/se/00.hello/sparc/linux/inorder-timing
> passed.
> *****
> build/SPARC/tests/opt/quick/se/02.insttest/sparc/linux/inorder-timing
> passed.
> ***** build/SPARC/tests/opt/quick/se/02.insttest/sparc/linux/simple-atomic
> passed.
> *****
> build/SPARC/tests/opt/quick/se/40.m5threads-test-atomic/sparc/linux/simple-timing-mp
> passed.
> ***** build/SPARC/tests/opt/quick/se/02.insttest/sparc/linux/simple-timing
> passed.
> *****
> build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-timing-dual
> passed.
> *****
> build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-timing
> passed.
> *****
> build/X86/tests/opt/quick/fs/10.linux-boot/x86/linux/pc-simple-atomic
> passed.
> *****
> build/X86/tests/opt/quick/fs/10.linux-boot/x86/linux/pc-simple-timing
> passed.
>
> See /z/m5/regression/regress-2012-07-24-03:00:01 for details.
>
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 24 Jul 2012 15:05:04 -0000
> From: "Nilay Vaish" <[email protected]>
> To: "Nilay Vaish" <[email protected]>, "Default" <[email protected]>
> Subject: Re: [gem5-dev] Review Request: Config: change how cpu class
>         is set
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1319/
> -----------------------------------------------------------
>
> (Updated July 24, 2012, 8:05 a.m.)
>
>
> Review request for Default.
>
>
> Description (updated)
> -------
>
> Changeset 9130:79148c86d9a5
> ---------------------------
> Config: change how cpu class is set
> This changes the way the cpu class is set when restoring from a
> checkpoint. Earlier, it was assumed that if the cpu type with which
> to restore is not the same as the cpu type with which to run the
> simulation, then the checkpoint should be restored with the atomic
> cpu. This assumption is being dropped. The checkpoint can now be
> restored with any cpu type, the default being the atomic cpu.
>
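
A tiny sketch of the behavior change (hypothetical helper and class-name strings, not the actual Simulation.py code):

```python
# Sketch: the CPU class used to restore a checkpoint is no longer
# forced to atomic when it differs from the run CPU type; any type
# may be requested, with atomic remaining the default. The helper
# name and class-name strings here are hypothetical.
def restore_cpu_class(restore_with=None, default="AtomicSimpleCPU"):
    """Return the CPU class name to restore a checkpoint with."""
    return restore_with if restore_with is not None else default

print(restore_cpu_class())              # AtomicSimpleCPU
print(restore_cpu_class("DerivO3CPU"))  # DerivO3CPU
```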
>
> Diffs (updated)
> -----
>
>   configs/common/Simulation.py b57966a6c512
>
> Diff: http://reviews.gem5.org/r/1319/diff/
>
>
> Testing
> -------
>
>
> Thanks,
>
> Nilay Vaish
>
>
>
> ------------------------------
>
> Message: 7
> Date: Tue, 24 Jul 2012 08:11:42 -0700
> From: Korey Sewell <[email protected]>
> To: gem5 Developer List <[email protected]>
> Subject: Re: [gem5-dev] Inorder switch on cache hit
> Message-ID:
>         <
> cabvets6dderhoj3+dqhlseavfpn8ozm3ocyxwwtwmctvsk0...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> > I haven't solved the alteration issue yet.
> What was the roadblock to solving it the way I suggested originally?
> I'd be interested in why you can't delay the call to "setMemStall" in
> cache_unit.cc until the CompleteDataAccess command has been tried X
> times (with X being the number of cycles to the cache hit).
>
> > I want to solve it by postponing the tick event in the amount of the
> > cache latency. That way the complete-cache-access stage will check for
> > the arrival of the cache data only after the cache has had time to
> > respond to the request.
> I'm not sure I follow this. Do you want to postpone the tick event of
> the CPU? If you do that, then what about all the other instructions in
> the pipeline that need to process? What about the next thread you are
> switching to? Wouldn't it need the CPU's tick to process?
>
> It seems to me that if you want to delay this structurally, what you do
> is add the necessary pipeline stages between your cache request and the
> expected completion of the cache hit.
>
> > Now, what I haven't figured out yet is how to access the cycle time
> > and the cache latency parameters (the ones I change through the
> > python script).
> What I would do is fix the simple case first: pick a latency you want
> and hardcode it. When you get that working, then work on getting it
> passed down through the Python.
>
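
Korey's suggestion of counting failed CompleteDataAccess attempts before declaring a stall can be sketched like this (hypothetical class and member names, not the real cache_unit.cc logic):

```python
# Sketch of the suggestion above (names are hypothetical): only
# declare a memory stall after the complete-access attempt has
# failed for as many cycles as the cache hit latency.
class DelayedStall:
    def __init__(self, hit_latency_cycles):
        self.hit_latency = hit_latency_cycles
        self.attempts = 0
        self.mem_stall = False

    def try_complete_access(self, data_ready):
        """One per-cycle attempt; returns True once the access completes."""
        if data_ready:
            self.attempts = 0
            return True
        self.attempts += 1
        # Only set the stall once the cache hit latency has elapsed.
        if self.attempts >= self.hit_latency:
            self.mem_stall = True
        return False

unit = DelayedStall(hit_latency_cycles=2)
unit.try_complete_access(False)   # cycle 1: data not ready, no stall yet
assert not unit.mem_stall
unit.try_complete_access(False)   # cycle 2: now a real miss, so stall
assert unit.mem_stall
```

The effect is that a short cache hit latency no longer looks like a miss after a single cycle.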
>
>
> >
> > Thanks,
> > Yuval.
> >
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]] On
> Behalf
> > Of Korey Sewell
> > Sent: Sunday, 22 July, 2012 20:45
> > To: gem5 Developer List
> > Subject: Re: [gem5-dev] Inorder switch on cache hit
> >
> > Did you figure out your 1st problem with altering the time for when a
> > thread switch will happen? I take it from your comments that you have
> > a hardcoded solution for that but are trying to figure out a general
> > solution to it.
> >
> > Is that your question, or are you now elaborating on a different
> > problem? From your text, it's a little confusing whether you solved
> > the 1st problem and are working on a 2nd, or are still considering
> > ways to solve the 1st.
> >
> > Lastly, I know some of the memory system code has been refactored
> > (Andreas, Ali?), but as far as I can remember the CacheUnit/CPU has an
> > instruction and a data port attached to it. You can traverse one of
> > those ports until you get to the Cache and then ask for the latency.
> > But again, I think the ports are changing, so I need one of the memory
> > system guys to follow up on that.
> >
> > I'm not sure of a cleaner way to do that, but you could also create
> > your own parameter to pass into the CPU that waits X cycles before
> > declaring a cache miss.
> >
> > On Sun, Jul 22, 2012 at 10:29 AM, Yuval H. Nacson
> > <[email protected]> wrote:
> >> Hello all,
> >>
> >>
> >>
> >> I'm a relatively new user of gem5, but I already have my share of
> >> experience.
> >>
> >> I attached a conversation I had with Korey Sewell (who suggested I
> >> move to this mailing list) on the users list, about switch-on-cache-hit
> >> switching threads on every memory access when the cache latency is
> >> longer than a cpu cycle.
> >>
> >> There's an example in the attached conversation.
> >>
> >> Now, what I want to do is to create a delay whenever a thread reaches
> >> for memory. For that I want to postpone the tick event of the cpu.
> >> Alas, I've yet to figure out how to reach the cache latency and CPU
> >> cycle time from within the code.
> >>
> >> Any suggestions or thoughts about my proposed solution?
> >>
> >>
> >>
> >> Thanks,
> >>
> >> Yuval
> >>
> >>
> >>
> >>
> >>
> >> Korey:
> >>
> >> Hi Yuval,
> >>
> >> Unfortunately, this isn't something you can switch in the config file
> >> (although it should be), so you'll need to edit some code to get this
> >> working.
> >>
> >> First, there is some code in pipeline_stage.cc that asks an
> >> instruction whether it's stalled on memory or not; I'm guessing
> >> "inst->isMemStall()". This function needs to not return true after
> >> only 1 cycle.
> >>
> >> Second, that function is set to true in cache_unit.cc. In the execute
> >> function, you'll probably see an "inst->setMemStall()" in there if it
> >> tries to complete the access and the data isn't available yet.
> >>
> >> What you'll need to do is edit that code in cache_unit.cc to just wait
> >> the necessary number of cycles before declaring a memory stall. Please
> >> move the conversation to gem5-dev if you want to go the route of
> >> submitting a patch to make the "MemStall" delay general instead of
> >> 1 cycle.
> >>
> >>
> >>
> >> On Mon, Jul 16, 2012 at 10:07 AM, Yuval H. Nacson
> >> <[email protected]> wrote:
> >>
> >>> Hey,
> >>>
> >>> In my quest through the inorder pipeline I've encountered another
> >>> problem.
> >>>
> >>> I run two threads on a switch-on-cache-miss machine. Also, as opposed
> >>> to the other questions I've sent so far, I'm using the 5-stage
> >>> pipeline supplied with the simulator.
> >>>
> >>> The frequency is 2 GHz and I set the cache latency to 800 ps (which
> >>> is more than a clock cycle).
> >>>
> >>> Now, in the fourth stage the memory request is sent and in the fifth
> >>> stage the cache is read.
> >>>
> >>> I get the following:
> >>>
> >>> 159888500: system.cpu.stage4: [tid:0]: Not blocked, so attempting to
> >>> run stage.
> >>> 159888500: system.cpu.stage4: [tid:0]: Processing instruction
> >>> [sn:41180] ldq with PC (0x1200003f0=>0x1200003f4)
> >>> 159888500: system.cpu.stage4: [tid:0]: [sn:41180]: sending request to
> >>> system.cpu.dcache_port.
> >>> 159888500: system.cpu.dcache_port: [tid:0]: [sn:41180]: Updating the
> >>> command for this instruction
> >>> 159888500: system.cpu.dcache_port: [tid:0]: [sn:41180]: Trying to
> >>> Complete Data Read Access
> >>> 159888500: system.cpu.dcache_port: STALL: [tid:0]: Data miss from
> >>> 0x12009ed78
> >>> 159888500: system.cpu.stage4: [tid:0]: [sn:41180] request to
> >>> system.cpu.dcache_port failed.
> >>> 159888500: system.cpu.stage4: [tid:0] [sn:41180] Detected cache miss.
> >>> 159888500: system.cpu.stage4: Inserting [tid:0][sn:41180] into switch
> >>> out buffer.
> >>> 159888500: system.cpu: Scheduling CPU Event (SquashFromMemStall) for
> >>> cycle 159888500, [tid:0].
> >>> 159888500: system.cpu.ResourcePool: Ignoring Unrecognized CPU Event
> >>> (SquashFromMemStall).
> >>> 159888500: system.cpu.stage4: Suspending [tid:0] due to cache miss.
> >>>
> >>> ...
> >>>
> >>> 159888800: system.cpu.dcache_port: [tid:0]: [sn:41180]: [slot:4]
> >>> Waking from cache access (vaddr.0x12009ed78, paddr:0x090d78)
> >>> 159888800: system.cpu.dcache_port: [tid:0]: [sn:41180]: Processing
> >>> cache access
> >>> 159888800: system.cpu.dcache_port: [tid:0]: [sn:41180]: Bytes loaded
> >>> were: 0000000000000000
> >>> 159888800: system.cpu.dcache_port: [tid:0] Waking up from Cache Miss.
> >>> 159888800: system.cpu: [tid:0]: Activating ...
> >>>
> >>> Note the time differences.
> >>>
> >>> I would like my cache to have a latency of 2-3 cycles without the
> >>> thread being switched. Can this be done?
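
For the numbers quoted above, the arithmetic is straightforward: at 2 GHz a cycle is 500 ps, so an 800 ps cache latency spans two CPU cycles. A small sketch (not gem5 code):

```python
import math

# Sketch (not gem5 code): how many whole CPU cycles a cache latency
# covers. At 2 GHz the cycle time is 500 ps, so an 800 ps cache
# latency is not satisfied until the second cycle after the request.
def latency_in_cycles(cache_latency_ps, cpu_freq_ghz):
    """Whole CPU cycles needed to cover a given cache latency."""
    period_ps = 1000.0 / cpu_freq_ghz
    return math.ceil(cache_latency_ps / period_ps)

print(latency_in_cycles(800, 2.0))  # 2
```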
> >>>
> >>> Thanks,
> >>>
> >>> Yuval
> >>>
> >>> _______________________________________________
> >>> gem5-users mailing list
> >>> [email protected]
> >>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
> >>
> >> --
> >> - Korey
> >>
> >> _______________________________________________
> >> gem5-dev mailing list
> >> [email protected]
> >> http://m5sim.org/mailman/listinfo/gem5-dev
> >
> >
> >
> > --
> > - Korey
> >
>
>
>
> --
> - Korey
>
>
> ------------------------------
>
>
>
> End of gem5-dev Digest, Vol 63, Issue 63
> ****************************************
>
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
