On 09/04/2012 1:28 PM, Jonathan S. Shapiro wrote:
> You can't point all you want, but it doesn't validate your argument. The
> existing open source ecosystem relies heavily on dynamic libraries,
> which means that source code is /not/ known at compile time in general.
> This is a technical issue, not a political one. The use of closed vs.
> open source doesn't alter the conditions under which compilation occurs
> in any particular programming language.

Agreed, but that's not the point. The open source ecosystem disproves
your proposition that the set of realistic programs that can be
whole-program compiled is empty. Whether open source currently uses
dynamic linking is irrelevant, because these programs do not depend upon
it. The exception is programs that explicitly load dynamic modules as
part of their purpose, but those don't depend on specialization for the
loaded modules, so some abstraction overhead is acceptable in that case.

A Cabal/CPAN/Gems-like repository for source is the viable technical
answer to the technical challenge you raise above, given that your
stated goal for BitC was not to do research. I am merely pointing out an
avenue to achieving a product using established methods that require no
innovation.

> Huh? I don't think you are thinking this through. I write a library
> function:
> 
>    def faux-add(a, b) = { a + b }
> 
> which is typed as
> 
>   Arith 'a => 'a x 'a -> 'a
> 
> and I put that in a library. I basically can't compile this function at
> all until I know the instance of Arith, which isn't known until dynamic
> compile time.
> 
> This means that I need the entire optimizer built into the dynamic
> linker.

Or you need the source for whole-program compilation, which is what I
suggested. I meant that, in practice, the library source will always be
available, so at least the prelude types can be specialized (and your
program's types, of course).
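To illustrate (using Rust's traits as a stand-in for Arith-style type
classes; the names here are mine, not BitC's): when the library source
is visible to the compiler, a generic function like faux-add can be
monomorphized, i.e. a separate specialized copy is emitted per
instantiation, with no dictionary dispatch left at runtime.

```rust
use std::ops::Add;

// A library function generic over any type with an Add instance,
// analogous to the faux-add example quoted above.
fn faux_add<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

fn main() {
    // With the source available, the compiler emits specialized copies:
    // faux_add::<i32> and faux_add::<f64> become direct machine
    // additions rather than calls through an Arith dictionary.
    println!("{}", faux_add(2, 3));     // i32 specialization
    println!("{}", faux_add(1.5, 2.5)); // f64 specialization
}
```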

You can then support dynamic linking as a case that does not benefit
from specialization and suffers an inherent performance penalty when
using certain abstraction mechanisms. It would be nice to have a better
answer, but you yourself acknowledged that we don't have a good theory
for this. So either you do the research you don't want to do, you accept
a compromise like the one I suggested, or you give up on some
abstraction.
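A sketch of that unspecialized fallback, again in Rust as an
illustration (the trait and names are mine): when the implementation is
only known at dynamic-link time, every call goes through an indirection,
here a vtable behind `dyn`, playing the role of a passed-in Arith
dictionary.

```rust
// The abstraction the dynamically linked caller is compiled against.
trait Arith {
    fn add(&self, a: i64, b: i64) -> i64;
}

struct IntArith;
impl Arith for IntArith {
    fn add(&self, a: i64, b: i64) -> i64 {
        a + b
    }
}

// This caller cannot be specialized: each call to `ops.add` is an
// indirect call through the dictionary/vtable. That indirection is the
// inherent abstraction overhead the dynamic-linking case pays.
fn faux_add_dyn(ops: &dyn Arith, a: i64, b: i64) -> i64 {
    ops.add(a, b)
}

fn main() {
    println!("{}", faux_add_dyn(&IntArith, 2, 3)); // prints 5
}
```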

On Mon, Apr 9, 2012 at 5:54 PM, David Jeske <[email protected]> wrote:
> In real systems we expect to be able to respond to user-actions
> 10-20ms. This is not possible to do reliably with today's GC systems.

I disagree. Reference counting collectors are inherently incremental.
Bacon et al.'s "An Efficient On-the-Fly Cycle Collection" describes a
concurrent reference counting GC that exhibits pause times under 3 ms
for Java programs. It incurs some overhead, but it remains competitive
with tracing GC considering its extremely low latency.
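A minimal illustration of that incrementality, using Rust's `Rc` (note
this plain `Rc` does not reclaim cycles; handling those is exactly what
Bacon-style cycle collectors add): reclamation happens at whichever
decrement drops a count to zero, so the work is spread across the
mutator in small bounded units rather than concentrated in a
stop-the-world pause.

```rust
use std::rc::Rc;

fn main() {
    let shared = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&shared); // count is now 2
    println!("{}", Rc::strong_count(&shared)); // prints 2

    drop(alias); // count back to 1; nothing is freed yet
    println!("{}", Rc::strong_count(&shared)); // prints 1

    drop(shared); // count hits 0: the vector is freed right here,
                  // a small bounded unit of work, with no global pause
}
```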

The largest cost is the root scan of the stack. If you pair that GC with
a language compiled to CPS form, the pause time will be in the hundreds
of nanoseconds at worst. CPS form costs you further overhead, but if
your goal is truly low latency and hard realtime, it's possible; nothing
is free.

Sandro

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev