> I did not observe such overheating.
That was probably because CPUs at that time did not consume enough power
for continuous polling to be a problem.
> What if anything has changed since that time?
Are you referring to libraries, the CPU, or something else entirely?
> What mechanism of system board design would cause such behavior? I want
> to know how a system board can overheat and need a software
> intervention. Can you offer any online links that discuss design issues
> related to this overheating?
I don't know any off hand, but the cause of this is really not that
difficult to understand. There's (for example) the BIOS service Interrupt
16h, function 00h: "Get keystroke". It is defined to return only after a
key has been pressed, or immediately if there is a key stored in the
BIOS's keyboard input buffer already. The keyboard input buffer is defined
to be somewhere, and the hardware interrupt (usually IRQ1, or Interrupt
09h) which is invoked on each physical keystroke appends the keystroke
that was pressed to this buffer. (This is a simplified description, but
the specifics don't matter here.) The BIOS's (pseudo-)code to service
Interrupt 16h, function 00h might look like this:
    if there's at least one keystroke in the buffer
        return the first keystroke from the buffer
    go to beginning and check again
(Technically, the interrupt flag is ensured to be set during this loop
so that the IRQ handler can execute while the BIOS waits for it.) As you may
understand, this means that most of the time (whenever waiting for
keyboard input) the CPU is caught in looping the few instructions that
check for whether a keystroke has arrived. This causes the CPU to be fully
utilized all the time, for maximum power consumption and heat development.
Idling programs basically use the same loop, but there's one more thing
    if there's at least one keystroke available
        use the first keystroke available
    otherwise
        put the CPU temporarily into power-saving mode
    go to beginning and check again
This additional operation makes the CPU save power (usually by going into a
sleep state until the next hardware interrupt comes along) but does not
decrease performance in any way. It costs only a few bytes to implement -
theoretically, this should be in the BIOS and/or DOS, but since it mostly
is not, I usually put it in my own applications as well. (This doesn't
conflict with possible implementations in the BIOS and/or DOS.) Either way,
you gain the advantage of lower heat development and power consumption.
The former means that components last longer, while the latter directly
saves you money. I think FDAPM is able to calculate how much "CPU time"
was spent in its idling (i.e. time that would otherwise have uselessly
wasted energy), although I don't know whether this is very accurate, or
how accurate it is when using idling applications. Notably, the idle time
is usually >95% on any recent CPU (with non-idling applications).
I recommend not running any modern system without FDAPM or idling
applications. The CPU probably won't overheat (or will switch itself
off or lower its consumption if that happens), but it might still reach
core temperatures in excess of 70°C. This makes it wear out faster and
probably isn't good for the other components of the machine either.
Freedos-user mailing list