OK. So now that I have the "stranded string" implementation sketched, I want
to explain why **BitC should take a position on string operations, but not
on string representation**.

String representation choices interact with many other issues:

GC design: Something like a stranded string is *required* for real-time
concurrent incremental GC -- though not for any reason having to do with
code points.

Space/time tradeoffs: In deeply embedded implementations, considerations of
implementation simplicity and representation size are likely to outweigh
considerations of performance.

Pre-existing runtimes: In .Net, we don't have any real choice about the
underlying native string implementation.

The BitC specification can provide a default implementation using (e.g.)
stranded strings, but it should not preclude implementations that must
operate under external constraints such as the ones given above.

My personal opinion is that stranded strings are probably the best way to go
in implementations that are not externally constrained, but even "native"
implementations of BitC may be constrained in practice by the choices of
(e.g.) various libraries written in C.

I also think that the discussion so far, which has considered only
non-stranded strings, omits an important design option: the stranded string
design.

But more importantly, remember that this is an area where (a) we can provide
a preferred implementation, and (b) we can require that implementations ship
with a "profile" describing the performance characteristics of their string
representation, but (c) we cannot dictate the representation adopted by a
given implementation without abandoning interop and/or certain GC options.


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
