Ok. I was thinking along those lines as well. In this case the CPU is effectively leaking stack space, which seems like a bug, although it's a slow leak that recovers and isn't likely to grow far enough to cause a visible problem.
Gabe

Quoting Steve Reinhardt <[email protected]>:

> You're right that the simple timing CPU is basically driven forward by the
> responses to ifetch requests. The original idea (which I don't think has
> changed) is to be the simplest, basically functional-only CPU that you can
> have that is capable of driving the memory system in timing mode. So having
> all the microops of a single instruction execute with no delay isn't
> necessarily a problem. Particularly with Korey's new in-order model,
> there's no justification for adding any complexity to the simple model just
> to make timing more realistic. I think a code reorganization is useful only
> if it simplifies the code in some fashion (fewer lines of code, less
> convoluted control flow).
>
> Steve
>
> On Mon, May 4, 2009 at 12:38 AM, Gabe Black <[email protected]> wrote:
>
>> It looks like the timing simple CPU once relied on fetches as a way
>> to roughly approximate the CPU doing a certain amount of work per cycle.
>> With microops that breaks down a bit, because no further fetching is
>> needed for a potentially large group of microops. As a result, assuming
>> they don't access memory themselves, those microops will all execute one
>> right after the other with no delay. Is there some system that I'm
>> missing that takes care of that?
>>
>> Also, it looks like the way the code is currently structured, the
>> point where one microop ends will start the next one almost recursively,
>> building a deeper and deeper call stack. This wasn't originally the
>> case, but without having to send off a fetch there's no event to delay
>> the call until the call stack collapses. This seems like yet another
>> reason to try to reorganize the code to have a wide rather than deep
>> call tree.
>>
>> Gabe
>> _______________________________________________
>> m5-dev mailing list
>> [email protected]
>> http://m5sim.org/mailman/listinfo/m5-dev
