In all seriousness, lower=faster is no more a restricting factor than
higher=faster. In either methodology you're going to run out of bits, and
whether it's at the high end (overflow) or the low end (underflow) is
irrelevant. When that condition approaches, you either need to add more
bits (to the left or right as appropriate), adjust the range of values
generated, or switch to a floating-point representation (which only
postpones using the first two options). The main disadvantage of the
current lower=faster scheme using integers is the increasing loss of
precision as zero is approached, due to truncation/rounding of the data
(a quick sketch of that collapse follows below the quoted exchange).

Brian Nielsen

On Wed, 2 May 2007 12:50:21 -0400, Alan Altmark <[EMAIL PROTECTED]> wrote:

>On Wednesday, 05/02/2007 at 09:02 MST, "Schuh, Richard" <[EMAIL PROTECTED]>
>wrote:
>> However, zero will be a limit --- and you need to multiply by 50%
>> (expressed as .5) to divide by 2. If you divide by .5, the result will
>> be an ever increasing value. (10 / .5 = 100 / 5 = 20)
>
>Now you guys cut that out! You know what I meant. Eventually everything
>turns to a value of 1 since you would never willingly round a capacity
>number *down*. A specialty engine could have 100 times the capacity of a
>CP, yet both would show a '1' for a sufficiently powerful CP.
>
>Alan Altmark
>z/VM Development
>IBM Endicott
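
For illustration only (the numbers are made up; no real capacity figures
appear above), a rough Python sketch of what the quoted exchange describes
when a lower=faster value is kept as a truncated integer:

  # Hypothetical lower=faster capacity values kept as truncated integers.
  # Each new machine generation is twice as fast, so its value is half the
  # previous one; integer truncation erodes precision, and once the value
  # bottoms out every faster machine reports the same 1.
  cap = 1000                      # made-up starting capacity value
  for generation in range(12):
      print(f"gen {generation:2d}: value = {cap}")
      cap = max(1, cap // 2)      # halve and truncate, never drop to 0

  # Alan's CP vs. specialty-engine case: once a sufficiently powerful CP
  # has bottomed out at 1, an engine with 100x its capacity would want
  # 1/100, but a capacity number is never rounded down to 0, so it also
  # reports 1.
  cp_value = 1
  specialty_value = max(1, cp_value // 100)   # 1 // 100 == 0, clamped to 1
  print(cp_value, specialty_value)            # both print 1

The run of 125 -> 62 -> 31 -> 15 already shows the truncation loss, and by
the ninth halving every faster machine is indistinguishable at 1.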
