And THAT is why you should strongly consider STCKF or STCKE. STCK spins in microcode until it can come up with a time greater than any time it has previously reported, consuming CPU cycles that are charged to your program.
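Roughly, the behavior amounts to "loop until the value you built is strictly greater than the last value handed out." This is only an illustrative Python sketch of that idea, not IBM's microcode; the names and the use of a software lock are my own invention for the example:

```python
import threading
import time

# Hypothetical sketch: a STCK-style routine that never returns a value
# less than or equal to one it has previously returned, spinning (and
# burning CPU cycles) whenever the clock has not yet advanced past the
# last value handed out. Real STCK does this in microcode per partition.
_last = 0
_lock = threading.Lock()

def stck_like():
    """Return a strictly increasing timestamp, spinning if necessary."""
    global _last
    while True:
        now = time.monotonic_ns()  # stand-in for reading the TOD clock
        with _lock:
            if now > _last:        # strictly greater than any prior result
                _last = now
                return now
        # clock reading was not past the last value returned; spin and retry
```

Two back-to-back calls can hit the same raw clock reading, and it is exactly that retry loop that costs you the cycles.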
If all you need is the time of day, accurate to some very, very small increment, use STCKF. If you need monotonically increasing timestamps, re-program for STCKE. The precision of STCK is model-dependent, not architectural, and I am not certain that IBM documents it.

Charles

On Tue, 27 Feb 2024 16:46:00 -0600, Jon Perryman <[email protected]> wrote:

>On Sat, 24 Feb 2024 19:50:32 +0000, Jim Mulder <[email protected]> wrote:
>
>>STCK, which inserts a processor related value in the low order bits to meet
>>the "unique with a partition" requirement.
>
>If there's not a simple answer, I'll take your word this is monotonic. Out of
>curiosity, what is the precision of STCK and how does it guarantee monotonic
>time? In other words, how does STCK distinguish between 10 STCKs on the same
>CPU in the same partition within 49 microseconds? Multiply 244 picoseconds
>(TOD bit 63) by 200 CPU IDs, STCK precision is 49 microseconds. With a 5Ghz
>Telum processor, single cycle instructions take 191 picoseconds which means a
>single CPU can potentially execute 256 instructions during that 49 microsecond
>timeframe.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
