On Mon, Apr 9, 2012 at 3:32 AM, David Jeske <[email protected]> wrote:

> The reason I sought out BitC is that I'm looking for solutions not to
> program-expression, but to software modularity....
>

The two are not entirely separable. One requirement for modularity is that
the language does not enforce overspecialization through insufficient
expressiveness.


> Some observations...
>
> 2) type-inference doesn't fix this... type inference systems are merely a
> technique for software brevity
>

I don't entirely agree. Type inference is relevant because programmers are
notoriously bad at specifying "most general" types.
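To make this concrete, here is an illustrative TypeScript sketch (names are mine, not BitC's; TypeScript requires explicit signatures rather than inferring them, but the contrast still shows the over-specialization an ML-style inferencer would avoid):

```typescript
// What a programmer typically writes by hand: the signature is
// over-specialized to number arrays, although the body is fully generic.
function firstOfNumbers(xs: number[]): number | undefined {
  return xs[0];
}

// The "most general" type for the same body. An ML-style inferencer
// would assign this automatically; callers over strings, records,
// etc. all work unchanged, which matters at module boundaries.
function first<T>(xs: T[]): T | undefined {
  return xs[0];
}

console.log(first([1, 2, 3])); // 1
console.log(first(["a", "b"])); // "a"
```

The module exporting `firstOfNumbers` forces every client onto one element type; the module exporting `first` does not. That is the modularity cost of under-general signatures.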


> 3) subtyping doesn't fix this.. it merely creates more challenges in both
> modular forward compatibility and performance subtyping across module
> boundaries
>

It may. But it is also an excellent basis for encapsulation of existential
types, which is very closely tied to modularity issues. On the other hand,
it's not the only candidate solution for that.
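A small sketch of the subtyping/existential connection, again in illustrative TypeScript (the interface name and representation are hypothetical): upcasting a concrete value to an interface type erases its representation, which is exactly the existential-style encapsulation at issue.

```typescript
// Clients see only this interface; the representation type is the
// hidden ("existential") part.
interface Counter {
  inc(): void;
  get(): number;
}

// The concrete object is a subtype of Counter. Returning it at the
// interface type means no caller can depend on the private state,
// so the module is free to change the representation.
function makeCounter(): Counter {
  let n = 0; // hidden state
  return {
    inc() { n += 1; },
    get() { return n; },
  };
}

const c = makeCounter();
c.inc();
c.inc();
console.log(c.get()); // 2
```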


> 4) type-classes don't fix this... they are a PL or PL-implementation
> feature and are not solving a modularity problem
>

Also don't agree here. While it doesn't have to be type classes, having
some mechanism for expressing constraints on types is very useful for
modularity.
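One way to see why, sketched in TypeScript using a dictionary-passing encoding in the spirit of type classes (the `Ord` interface and names are illustrative): the module exports `sortBy` against a *constraint* on `T`, not against any concrete type, which is what lets independently developed modules reuse it.

```typescript
// A constraint on types: any T used with sortBy must come with
// an ordering. This plays the role a type class would play.
interface Ord<T> {
  compare(a: T, b: T): number;
}

// One instance of the constraint, for numbers.
const numOrd: Ord<number> = { compare: (a, b) => a - b };

// Generic code written against the constraint. A client module can
// supply its own Ord instance for its own types without touching
// this module.
function sortBy<T>(ord: Ord<T>, xs: T[]): T[] {
  return [...xs].sort((a, b) => ord.compare(a, b));
}

console.log(sortBy(numOrd, [3, 1, 2])); // [1, 2, 3]
```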


> IMO, if we want to move past C-shared-libraries, the solution we need is a
> model for type-safe high-performance loadable software components with
> upgradability (manageable forward compatibility).
>

I'd certainly be very happy to see that, but I think we need to take one
step at a time. We still don't know how to do modularity in the *absence* of
versioning issues.


> Microsoft singularity is quite interesting research in this direction, as
> it is attempting to eclipse the C-shared-library.
>

The "whole program" idea in Singularity rests on a claim that David Tarditi
made about a process he called "tree shaking". Tree shaking, in brief, is a
process for removing unreachable code from programs. David claimed, without
supporting data or careful study, that the binary size reduction obtained
from tree shaking was greater than the size reduction obtained from shared
libraries.

There are two problems with this claim:

   1. It disregards the fact that the two optimizations are orthogonal. The
   ability to remove unreached code does not reduce the value of gathering *
   reused* code in a common place.
   2. The metric of interest isn't the size reduction in a single program,
   but the total code footprint across the system as a whole (that is: across
   *many* programs). The tree shaking approach results in a system where *
   most* programs will duplicate a certain body of code that is commonly
   used. That's the code that belongs in shared libraries.

Another way to say this is that what you really need is "whole system" tree
shaking rather than "whole program" tree shaking, and there are scaling
issues with that.
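A back-of-the-envelope illustration of point 2, with entirely made-up numbers, just to show why the system-wide metric diverges from the per-program one:

```typescript
// Hypothetical system: 50 programs, each with 200 KB of unique code
// and all reusing the same 300 KB of common (libc-like) code.
const programs = 50;
const uniqueKB = 200;
const commonKB = 300;

// Whole-program tree shaking: every program statically carries its
// own copy of the common code.
const shakenTotal = programs * (uniqueKB + commonKB);

// Shared library: the common code exists once, system-wide.
const sharedTotal = programs * uniqueKB + commonKB;

console.log(shakenTotal); // 25000 KB
console.log(sharedTotal); // 10300 KB
```

Each individual shaken binary may well be smaller than its unshaken counterpart, yet the system-wide footprint is dominated by the duplicated common code.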

I believe that subsequent practical experience may have caused David to
revise his opinion on this.


> A3) software-virtual-machines provide some combination of features (JVM:
> a,b,e,f,h), (MSIL: a,b,c,d,e,f,h,i,j), but are still both missing a
> critical missing link to replace C-shared libraries... "e" (i.e.
> deterministic soft-real-time performance), making them unsuitable for
> layered subsystems. (because worst-case GC pauses are unacceptably large
> both in large-heaps and layered small-heaps)
>

I'm not sure why you say that for layered small heaps, and I'm fairly
convinced that it is wrong for large heaps *provided* concurrent collection
is used. Unfortunately, concurrent collector technology hasn't been widely
deployed.


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev