Is "the shortest possible execution time" even a meaningful concept? My
understanding is that at least one possible answer is zero. That is, it may
be possible that you have a sequence of instructions A, B, C, and that I
could introduce one more instruction between A and B without changing the
length of time it takes the sequence to execute -- because it simply "fills
up" part of an inevitable delay before executing B or C -- so the answer to
your question is effectively zero.

OTOH, as I said and EJ confirmed, "how much longer a fetch takes if not in
L1 cache" is "one HECK of a lot longer." As I said, my new mental model is
"instructions take essentially no time; (uncached) data fetches take a real
long time." I think the concept of "how long an instruction takes" is
essentially an outmoded way of thinking about the problem. Instructions
don't take time; data fetches do.
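A rough way to see this on any modern machine (a sketch, not z-specific, and
in Python the interpreter overhead mutes the effect somewhat): walk the same
table once in prefetch-friendly sequential order and once as a dependent
random pointer chase, where each load must wait for the previous one to
return from memory. The timing gap is almost entirely cache/memory latency,
not "instruction time." The sizes and helper names here are illustrative,
not from the original discussion.

```python
import random
import time

def time_access(n=1_000_000):
    # Sequential walk: entry i points to i+1, so the hardware
    # prefetcher hides nearly all of the memory latency.
    seq = list(range(1, n)) + [0]

    # Random walk: a shuffled single-cycle permutation, so every
    # load depends on the previous load's result and the prefetcher
    # cannot help; each step pays the (un)cached fetch latency.
    order = list(range(n))
    random.shuffle(order)
    rnd = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        rnd[a] = b

    def chase(table):
        t0 = time.perf_counter()
        i = 0
        for _ in range(n):
            i = table[i]
        return time.perf_counter() - t0

    return chase(seq), chase(rnd)

seq_t, rnd_t = time_access()
print(f"sequential: {seq_t:.3f}s  dependent random: {rnd_t:.3f}s")
```

Both loops execute exactly the same number and kind of instructions; any
difference you observe is the data fetches, which is the point above.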

Charles

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On
Behalf Of Mike Schwab
Sent: Tuesday, February 04, 2014 11:11 AM
To: [email protected]
Subject: Re: CPU time

I think a table would be useful: the shortest possible execution time for
each instruction, how many operands it uses, and at the end a note on how
much longer a fetch takes if an operand is not stored in the fastest level
of cache.

On Tue, Feb 4, 2014 at 10:01 AM, John Gilmore <[email protected]> wrote:
> I of course agree that "much work remains to be done"; but I am 
> hopeful that instruction-execution counts will in time come to 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
