>
>
>> Microsoft singularity is quite interesting research in this direction, as
>> it is attempting to eclipse the C-shared-library.
>>
>
> The "whole program" idea in Singularity rests on a claim that David
> Tarditi made about a process he called "tree shaking". Tree shaking, in
> brief, is a process for removing unreachable code from programs. David
> claimed, without supporting data or careful study, that the binary size
> reduction obtained from tree shaking was greater than the size reduction
> obtained from shared libraries.
>
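The tree-shaking process described above can be sketched as a reachability walk over a whole-program call graph: keep only the functions reachable from the entry points, drop the rest. The call graph and function names below are purely illustrative, not taken from Singularity:

```python
# Minimal sketch of "tree shaking" (reachability-based dead-code
# elimination): starting from the entry points, walk the call graph
# and retain only the functions that can actually be reached.

def tree_shake(call_graph, entry_points):
    """Return the set of functions reachable from the entry points."""
    reachable = set()
    worklist = list(entry_points)
    while worklist:
        fn = worklist.pop()
        if fn in reachable:
            continue
        reachable.add(fn)
        worklist.extend(call_graph.get(fn, ()))
    return reachable

# Hypothetical whole-program call graph: export_pdf is linked in
# but never called from main, so it gets shaken out.
call_graph = {
    "main":       ["parse", "render"],
    "parse":      ["lex"],
    "render":     ["draw"],
    "export_pdf": ["draw"],   # unreachable from main
}

kept = tree_shake(call_graph, ["main"])
dropped = set(call_graph) - kept
# kept    == {"main", "parse", "render", "lex", "draw"}
# dropped == {"export_pdf"}
```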
> There are two problems with this claim:
>
>    1. It disregards the fact that the two optimizations are orthogonal.
>    The ability to remove unreached code does not reduce the value of gathering
>    *reused* code in a common place.
>    2. The metric of interest isn't the size reduction in a single
>    program, but the total code footprint across the system as a whole (that
>    is: across *many* programs). The tree shaking approach results in a
>    system where *most* programs will duplicate a certain body of code
>    that is commonly used. That's the code that belongs in shared libraries.
>
>
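Point 2 can be made concrete with back-of-envelope arithmetic (all numbers below are made up for illustration, not measurements): when many programs each statically duplicate a common body of code, the per-program savings from tree shaking are quickly dwarfed by the system-wide cost of the duplication that a shared library would have avoided.

```python
# Illustrative footprint comparison; every figure is hypothetical.
n_programs = 50
common_kb  = 800   # commonly used library code each program needs, in KB
unique_kb  = 200   # average program-specific code left after tree shaking

# Tree shaking alone: every program carries its own copy of the common code.
static_total = n_programs * (unique_kb + common_kb)

# Shared library: the common code exists once on the system.
shared_total = n_programs * unique_kb + common_kb

# static_total == 50000 KB, shared_total == 10800 KB
```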
Re 1. In Singularity's case, its IPC was clearly geared toward many async services taking over some of the roles of DLLs, and hence encouraging reuse at the service level, more like Minix and other microkernels. (I mean, Singularity didn't even have DLLs in the public release, and it was a long way from supporting them.)

Re 2. If you have a single large static system runtime like .NET, the remaining shared code becomes small. That remaining code is often very domain-specific, and hence well placed in services. In many cases, for a Singularity-type system/runtime, this would reduce the overall system footprint significantly.


Ben
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
