Joel C. Ewing wrote:
My impression of the PC clock is that it was never intended for any purpose other than maintaining wall clock time, and as such has appropriately low resolution. My other impression is that there is some Operating system involvement in maintaining its value on the PC: that the clock may run "slow" on a system that is totally maxed out or unable to service interrupts for an extended period. On the original S/360 introduced in 1964, I believe a System Timer, which also required periodic interrupt servicing, was used to track clock time and it had similar drift problems. I believe the hardware TOD clock was introduced with the following IBM S/370 architecture, so it has been around for at least 30 years.

the 360 clock was a full word at location 80 in storage. low-end 360s tic'ed at approx. 3.3 milliseconds; high-end 360s had a high-resolution timer feature that tic'ed at approx. 13 microseconds. the full period of the clock was the same in both cases; the high-resolution clock tic'ed the lowest bit ... the lower-resolution clock tic'ed one of the higher-order bits.
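(a rough back-of-envelope on those numbers, as i understand the interval timer: the word was treated in units where bit 23 counted 1/300ths of a second, with the high-resolution feature tic'ing the lowest bit, bit 31 ... so the two tic rates differ by a factor of 256 but the period of the full word comes out the same:)

```python
# back-of-envelope arithmetic for the s/360 interval timer, based on
# the tic rates described above (units of 1/300 sec at bit 23 is my
# understanding, not something stated in the post):

STANDARD_TIC = 1 / 300            # seconds per tic of bit 23 (~3.3 ms)
HIGH_RES_TIC = 1 / (300 * 256)    # seconds per tic of bit 31 (~13 us)

# period of the full 32-bit word is identical either way:
standard_period = (2**32 >> 8) * STANDARD_TIC   # 2^24 tics of bit 23
high_res_period = 2**32 * HIGH_RES_TIC          # 2^32 tics of bit 31

print(f"standard tic:  {STANDARD_TIC * 1000:.2f} ms")
print(f"high-res tic:  {HIGH_RES_TIC * 1e6:.2f} us")
print(f"full period:   {standard_period / 3600:.2f} hours")
```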

the clock making a transition would generate an external interrupt ... so it could be used for things like time-slices (cp67 used it for maintaining wall-clock time, interval timing, and time-slices).

370 went to a register paradigm ... and three separate constructs: the tod clock, the clock comparator (interrupt when the tod clock passed a specified time), and the cpu timer ... which could be used for time-slices and/or cpu usage accounting. this lowered the software burden of trying to make a single timing construct serve multiple different purposes.
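(for reference, the 370 tod clock is a 64-bit counter where bit 51 increments once per microsecond, with an epoch of january 1, 1900 ... so converting a raw value to wall-clock time is just a shift and an offset; a rough python sketch of the conversion:)

```python
from datetime import datetime, timedelta, timezone

# sketch of s/370 tod clock conversion: 64-bit counter, bit 51
# increments once per microsecond, epoch of january 1, 1900 UTC.

TOD_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def tod_to_datetime(tod: int) -> datetime:
    """convert a raw 64-bit tod clock value to a UTC datetime."""
    microseconds = tod >> 12          # bit 51 == 1 microsecond
    return TOD_EPOCH + timedelta(microseconds=microseconds)

def datetime_to_tod(dt: datetime) -> int:
    """inverse conversion: UTC datetime back to a raw tod value."""
    microseconds = (dt - TOD_EPOCH) // timedelta(microseconds=1)
    return microseconds << 12
```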

the other issue driving the change was main memory contention ... since the 360 timer hardware had to get the memory bus to update location 80.

i had done some amount of work on cp67 as an undergraduate in the 60s, redoing the terminal support (including adding ascii/tty terminal support) and was trying to make the 2702 terminal controller do some stuff that it couldn't quite do (i.e. both automatic terminal recognition and automatic baud-rate detection). somewhat as a result, the univ. started a project to build a clone controller (someplace there was an article that blamed us for the pcm controller business)
http://www.garlic.com/~lynn/subtopic.html#360pcm

the 360 channel interface was reverse engineered, and a channel interface card was built and put in an Interdata/3 minicomputer programmed to emulate 2702 functions. one of the early debugging problems was attaching to the 360/67 channel and "red lighting" the 360/67 ... stopping the processor. it turns out that the high-resolution timer tics and schedules an update of the location 80 timer value; if the timer tics again before the previous update has happened, the hardware treats it as an error and stops the machine. the channel interface card had signaled the channel to hold the memory bus for the transfer ... and needed to periodically interrupt the transfer to allow other access (like the timer) to the memory bus.

As to timers in other processors ... we were doing high-speed networking in the 80s
http://www.garlic.com/~lynn/subnetwork.html#hsdt

and were doing various things ... like I added RFC 1044 support to the standard mainframe tcp/ip product. at the time, the standard product would get about 44kbytes/sec aggregate sustained thruput with high pathlength overhead (consuming a full 3090 processor for that 44kbytes/sec). in some tuning of the RFC 1044 support at cray research ... between a cray and a 4341-clone ... we were getting 1mbyte/sec aggregate sustained using only a fraction of the 4341 processor.
http://www.garlic.com/~lynn/subnetwork.html#1044

we also had deployed our own high-speed backbone ... although when the NSFNET backbone RFP was published (operational precursor to the current internet), we weren't actually allowed to bid (although we were allowed to act as the technology red team with an alternative technology proposal). however, we did get a technology audit from NSF that concluded what we already had operational was at least five years ahead of all bid submissions for the NSFNET backbone RFP. minor ref:
http://www.garlic.com/~lynn/internet.html#0
http://www.garlic.com/~lynn/2006e.html#38
http://www.garlic.com/~lynn/2006f.html#12

one of the things we had done in our backbone implementation and deployment was rate-based pacing ... which we claimed was a significant improvement over the windowing-based stuff for congestion control.

in fact, the month that the slow-start talk (windowing-based congestion control) was given at the IETF meeting ... the annual sigcomm proceedings had a paper on how windowing-based congestion control was non-stable in larger heterogeneous networks (for a number of reasons). the issue was that rate-based pacing required the availability of a reasonable set of timer functions (in order to implement the "rate" construct ... i.e. activity per unit time).

a lot of the tcp/ip implementations in the period were on platforms that lacked any reasonable set of timer features ... and it was therefore somewhat necessary for them to do a purely event-based paradigm ... rather than being able to implement a time-based paradigm.
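(a minimal sketch of the distinction ... this is my illustration, not the actual hsdt implementation: rate-based pacing needs a usable clock so each send is scheduled off a timer, i.e. a fixed inter-packet gap derived from the target rate, rather than being triggered by an ack/window event:)

```python
import time

# minimal rate-based pacer sketch (illustrative, not the actual hsdt
# backbone code): a target rate in packets/sec becomes a fixed
# inter-packet gap, and each send is scheduled off a clock.

class RatePacer:
    def __init__(self, packets_per_sec: float, clock=time.monotonic):
        self.interval = 1.0 / packets_per_sec   # gap between sends, seconds
        self.clock = clock
        self.next_send = clock()

    def delay_until_next_send(self) -> float:
        """seconds to wait before the next packet may go out (0 if due)."""
        return max(0.0, self.next_send - self.clock())

    def record_send(self):
        """advance the schedule by one inter-packet interval;
        if we fell behind, restart the schedule from now."""
        self.next_send = max(self.next_send + self.interval, self.clock())
```

with a platform that only delivers events (no timers), there is nothing to drive `delay_until_next_send` ... which is the point being made above about why those stacks ended up purely event-based.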

of course, doing the rate-based stuff was helped by having also been heavily involved with dynamic adaptive resource management as an undergraduate in the 60s ...
http://www.garlic.com/~lynn/subtopic.html#fairshare

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html