Filipe Brandenburger <[EMAIL PROTECTED]> writes:
> 
>But, back to the efficiency issue, I _THINK_ the scenario I described is not 
>inefficient. What it does differently from a monolithic system: it uses 
>callbacks instead of fixed function calls, and it doesn't inline the 
>functions. First, callbacks take at most 1 cycle more than fixed function 
>calls (is this right???), 

No - a memory fetch can take a long time (tens of cycles).
Mostly that can be hidden by the pipeline, but branches (i.e. calls)
tend to expose it more. But we are already thinking of "vtables", which
are no better.
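
For what it's worth, a minimal C sketch of the two call styles under
discussion (all names here are invented for illustration):

    /* Direct call: the branch target is fixed at link time. */
    static int add_fixed(int a, int b) { return a + b; }

    int direct(int a, int b)
    {
        return add_fixed(a, b);
    }

    /* Callback / vtable style: the target must first be loaded
     * from memory, then branched to.  If that load misses the
     * cache, the "1 cycle" estimate becomes tens of cycles. */
    typedef struct vtable {
        int (*op_add)(int, int);
    } vtable;

    int indirect(const vtable *v, int a, int b)
    {
        return v->op_add(a, b);
    }

The extra cost shows up mainly when the pointer load misses the cache
or defeats branch prediction, not in the instruction count itself.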

>because the processor must fetch the code address 
>from a memory location, instead of just branching to a fixed memory 
>address. Compared to all the code Perl uses to handle SVs and such stuff, I 
>think 1 cycle wouldn't kill us at all! 
> 
>Well, inline functions _CAN_ make a difference if there are many calls to 
>one function inside a loop, or something like this. And this _CAN_ be a 
>bottleneck. 

Inline functions can also cost you - the single out-of-line function 
may already be in the cache while the plethora of inlined copies is not,
or the extra code size thrashes the cache.
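
To make the cache point concrete, a small sketch (function names
invented; assumes the compiler honours the inline hint):

    /* With "inline", the body is duplicated at every call site,
     * growing the total code size; the single out-of-line copy
     * may already be warm in the instruction cache. */
    static inline int str_len_i(const char *s)
    {
        int n = 0;
        while (s[n]) n++;
        return n;
    }

    int total_len(const char *a, const char *b, const char *c)
    {
        /* Three call sites -> three copies of the loop body,
         * versus three short calls to one shared copy. */
        return str_len_i(a) + str_len_i(b) + str_len_i(c);
    }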

>Well, I have one idea that keeps our design modular, breaks 
>dependencies between subsystems (like that of using async i/o system without 
>having to link to the whole thing), and achieves efficiency through inline 
>functions. We could develop a tool that works at the source code level and 
>does the inlining of functions for us. I mean a perl program that opens the 
>C/C++ source of the kernel, looks for pre-defined functions that should be 
>inlined, and outputs processed C/C++ in ``spaghetti-style'', very messy, 
>very human-unreadable, and very efficient. 

And already discussed ;-) 
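
For concreteness, a hypothetical before/after of what such a
source-level inliner might emit (the INLINE_ME marker and every
name below are invented):

    /* Before: source as written, with a marker the tool recognises. */
    /* INLINE_ME */
    static int flags_ok(int f) { return (f & 0x3) == 0x3; }

    int check_before(int f)
    {
        return flags_ok(f);
    }

    /* After: the generated "spaghetti" output, with the call
     * replaced by the function body.  (The tool would rewrite
     * check_before in place; renamed here so both versions can
     * sit in one file.) */
    int check_after(int f)
    {
        int _r;
        { int _f = f; _r = (_f & 0x3) == 0x3; }
        return _r;
    }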

-- 
Nick Ing-Simmons <[EMAIL PROTECTED]>
Via, but not speaking for: Texas Instruments Ltd.
