I agree that it is more important to write reliable, understandable, and maintainable code, and that programmer time should not be wasted trying to squeeze out the last microsecond of performance. However, knowing the general relative performance of instructions can be helpful when designing the particulars of a heavily used algorithm. For example, if I need to know whether an integer is a multiple of 1024, I could test the low 10 bits for zero, or I could divide by 1024 and check for a zero remainder. Both methods could be considered reliable, understandable, and maintainable. For infrequently executed code, it probably does not matter which method is used, but for code that will run millions or billions of times, it could make a big difference.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

