I must say the idea is growing on me. Since it's an abstraction layer, the
strand reference can be, say, a simple char UCS-1 strand with no second
reference tree for small strings, and we can intern small strings heavily
when using a GC. Are you considering a hack similar to .NET's, where the root
char[] is internal and not a separate object? The fact that you suggested
flags on the array (rather than typeof()) seems to suggest it. That is worth
doing, as it means you're creating only one object instead of two.

 

However, won't you need some API for legacy C char[] string code anyway,
especially where the code may not have, or want, a heap or GC? Java and C#
don't have this, and it is probably one of the reasons they are hard to use
for embedded/systems programming; witness all the noHeap stuff and string
hacks in Singularity.

 

Regards, 

 

Ben

From: [email protected] [mailto:[email protected]] On
Behalf Of Jonathan S. Shapiro
Sent: Friday, October 15, 2010 5:57 PM
To: [email protected]; Discussions about the BitC language
Subject: Re: [bitc-dev] Unicode and bitc

 

On Fri, Oct 15, 2010 at 12:44 AM, Ben Kloosterman <[email protected]>
wrote:


I think small string and big string separation may be better


I think that small string and big string are too complicated.

Let's see what the perf implications of stranded strings actually turn out
to be.


shap


_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
