I'm working on some code that tries to detect a loop in a subtask by examining 
the LCCAWTIM value for each CPU. My detector task runs in an STC that wakes up 
every 0.5 seconds and trolls through the LCCA vector, calculating wait-time 
deltas between iterations. 

The routine that retrieves LCCAWTIM runs on a half-second timer, so I would 
expect the wait-time delta for any CP to be less than 0.5 seconds. But the 
deltas between runs show values that significantly exceed the 0.5 seconds 
(which I had equated to 500 microseconds). 
For example: 

On a pass through the routine, we have the LCCAWTIM values of 

Iteration 1 - 000e3134 9bba4d26 
Iteration 2 - 000e3134 bfb14032 - Delta - 00000000 23f6f30c 
Iteration 3 - 000e3134 d1ed1c5a - Delta - 00000000 123bdc28 
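Just to show the subtraction itself, the two deltas above can be reproduced from the raw doublewords with plain unsigned 64-bit arithmetic (a Python sketch; the mask emulates 64-bit wrap):

```python
def tod_delta(new, old):
    """Unsigned 64-bit difference of two raw LCCAWTIM samples."""
    return (new - old) & 0xFFFFFFFFFFFFFFFF

it1 = 0x000E31349BBA4D26  # iteration 1
it2 = 0x000E3134BFB14032  # iteration 2
it3 = 0x000E3134D1ED1C5A  # iteration 3

print(f"{tod_delta(it2, it1):016X}")  # 0000000023F6F30C
print(f"{tod_delta(it3, it2):016X}")  # 00000000123BDC28
```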

According to the documentation, LCCAWTIM is a doubleword value in which bit 51 
represents 1 microsecond. To convert to microseconds, we lop off the lower 3 
nibbles (bits 52-63). By my arithmetic, 0.5 seconds is only x'1F4' 
microseconds, so I expected all the raw deltas to be x'1F4000' or less. 
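Lopping off the lower three nibbles is a 12-bit right shift; after the shift, the low-order bit of the result is the old bit 51, i.e. one microsecond per unit. A small sketch applying that conversion to the two deltas above:

```python
def tod_to_usec(raw):
    """Drop bits 52-63 (the three low nibbles) of a TOD-format
    value, leaving bit 51 = 1 microsecond as the unit."""
    return raw >> 12

print(hex(tod_to_usec(0x23F6F30C)), tod_to_usec(0x23F6F30C))  # 0x23f6f 147311
print(hex(tod_to_usec(0x123BDC28)), tod_to_usec(0x123BDC28))  # 0x123bd 74685
```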

Now the first delta converts to x'23F6F', or 147,311 - which I have been 
reading as 147 seconds, considerably longer than the 500 microseconds. The 
second delta is smaller at x'123BD' (74,685), but at what I make out to be 74 
seconds it is still far longer than the 0.5 seconds the timer is cycling. 
What is interesting is that in a test run that uses multiple tasks/jobs, I do 
get zero and near-zero deltas between iterations. 
 
What am I missing here?   

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
