Benny Amorsen wrote:
> Matthew Fredrickson <[EMAIL PROTECTED]> writes:
> 
>> Actually, with the way caching is done on nearly all modern processors, 
>> it is debatable whether a lookup table is the optimal way to do the 
>> conversion, at least for a simple codec such as ulaw or alaw.  In fact, 
>> the time it takes to fetch memory on a cache miss can easily ruin the 
>> performance of a single-element lookup in a table.
> 
> If the compiler is clever enough, you can embed a small lookup table
> in the instruction stream. Instruction prefetching will automatically
> ensure the page is in I-cache, and even on most processors that can't
> read data from I-cache, the table will be in 2nd-level cache.
> 
> Low-level optimizations like these are often dependent on processor
> architecture, though.

This is very true.  I mostly wanted to make sure people knew that the 
cache miss penalties on a big, multi-page table like a lin-to-mu lookup 
can be more of a slowdown (and for something as simple as lin-to-mu, 
will be) than simply computing the conversion with a short instruction 
sequence run from I-cache, which is far less likely to miss given the 
small number of instructions involved.
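To put a rough number on it: a table indexed directly by the 16-bit 
linear sample is 65536 one-byte entries (64 KB), which spans many 4 KB 
pages and easily exceeds a typical L1 data cache, while the computed 
conversion is only a few dozen instructions.  Below is a minimal sketch 
of the computed path using the standard G.711 segment/mantissa encoding; 
the function name and constants are illustrative, and this is not 
Asterisk's actual implementation.

#include <stdint.h>

/* Computed 16-bit linear PCM -> G.711 mu-law encoding (segment + mantissa).
 * Roughly a couple dozen instructions, with no data-cache footprint
 * beyond the sample itself. */
static uint8_t linear_to_mulaw(int16_t pcm)
{
    const int BIAS = 0x84;   /* standard mu-law bias (132) */
    const int CLIP = 32635;  /* clip so sample + BIAS stays within 15 bits */
    int sample = pcm;
    int sign = 0;
    int exponent, mantissa, mask;

    if (sample < 0) {
        sign = 0x80;
        sample = -sample;
    }
    if (sample > CLIP)
        sample = CLIP;
    sample += BIAS;

    /* Locate the highest set bit among bits 14..7 to pick the segment (exponent). */
    exponent = 7;
    for (mask = 0x4000; (sample & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;

    /* The four bits below the leading 1 form the mantissa. */
    mantissa = (sample >> (exponent + 3)) & 0x0F;

    /* Mu-law transmits the byte bit-inverted. */
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}

Whether this beats a table in practice depends on whether the table 
stays resident in D-cache between calls, which is exactly the point 
under discussion above.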

Matthew Fredrickson
Digium, Inc.
