Hi Mark,
  If you're referring to our work, gem5-gpu (
https://gem5-gpu.cs.wisc.edu/wiki/), we do support the various flavors of
gem5-style fast-forwarding/checkpointing and cache warmup.  Currently, we
are limited in two ways. First, gem5-gpu can only fast-forward to, or
checkpoint before, the point where the first stream operation (e.g., a
cudaMemcpy or kernel launch) is scheduled in the stream manager.  This
restriction could be lifted by checkpointing the stream manager itself,
which should be fairly simple and is currently among our top priorities.
Second, the simulator does not yet support fast-forwarding or
checkpointing of the GPU cores and shared memory; that would require a
fair amount of modification to GPGPU-Sim.
 Regarding warm-up: since gem5-gpu uses Ruby cache hierarchies, the caches
can be warmed up after a checkpoint restore, similar to the MOESI_hammer
Ruby protocol.  Again, we would need to support checkpointing between GPU
kernels so that something could run on the GPU before the checkpoint is
created, in order to warm the GPU caches.
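
  For the CPU-side flow Mark asked about, the usual pattern in a gem5
config script looks roughly like the following.  This is only a minimal
sketch of a config fragment: the tick count and checkpoint directory name
are placeholders, and the exact m5 calls can differ between gem5 versions.

```python
import m5
from m5.objects import Root

# ... build `system` as usual, then:
root = Root(full_system=False, system=system)

# Create a checkpoint after a fast-forward phase:
m5.instantiate()
m5.simulate(1000000000)     # fast-forward for some number of ticks
m5.checkpoint('ckpt.1')     # write a checkpoint to restore from later

# To restore instead (Ruby caches then warm up during simulation):
# m5.instantiate('ckpt.1')
# m5.simulate()
```

In gem5-gpu, this same flow works today only up to the first stream
operation, for the reasons above.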

  Joel


On Wed, Mar 20, 2013 at 1:51 PM, Wilkening, Mark <[email protected]> wrote:

> Hello,
>
> I am wondering if the GPU model supports any form of fast-forwarding or
> warm-up. Could this be implemented in the python in the same manner as the
> CPU only model or is there something that would make support in the GPU
> model more complicated?
>
> Thanks.
> _______________________________________________
> gem5-dev mailing list
> [email protected]
> http://m5sim.org/mailman/listinfo/gem5-dev
>



-- 
  Joel Hestness
  PhD Student, Computer Architecture
  Dept. of Computer Science, University of Wisconsin - Madison
  http://www.cs.utexas.edu/~hestness