So what I'm saying here is that the cost of multiple GCs is low, but the
cost of multiple static compilation strategies is high.

 

I agree with these statements. That said, BitC will have multiple standard
libs:

-          you will need to support a kernel lib (threaded? reentrant?). In
my case processes have a single user thread; why pay the price for a
concurrent lib?

-          You need to support libs for ARM, which may be a major market as
it is more C-dependent and changing.

I do not believe that this is correct. Yes, there will be multiple libraries
in the sense that there will be multiple different "DLL"-like things. That's
not what I was referring to. What I was referring to is the need for
multiple **compilation models**.

 

 

How is the cost of this different from just compiling with a different lib
or DLL (for the compiler and the target)? Isn't the main cost of multiple
compilation models the cost of managing two libs which are in sync with the
client?

 

 

Concerning the cost of concurrency on non-concurrent applications, I think
this is a non-issue for the following reasons:

 

- To first order, *all* processors in the world - including embedded
processors - are now multicore. In practice, this means that non-concurrent
code is (perhaps sadly) going away. This may not be true for deeply embedded
processors, but we should view those as a distinct target.

- Similarly, nearly all processors are now multi-issue.

 

 

In both cases this doesn't cover the case where processes are
single-user-thread (with maybe the GC on a separate thread) and you run lots
of lightweight processes on different processors. I know it's unusual, but
it's a personal interest and off topic.

 

 

I also think that it's an option to consider nullifying the lock operations
(replacing them with NOPs) at library load time, leaving only one on-disk
image of the library.

 

 

Good solution; a load-time patch would work, if a bit yucky.

 

 

 

Given truly crappy code generation (which has been historically typical on
ARM) targeting a pre-Cortex-A8 processor, I agree. For Cortex-A8 and later,
I strongly doubt it.

 

 

Yes, Cortex-A8 is pretty high end. In 5 years it will be the bottom,
though.

 

 

... Meaning superscalar is either dangerous (telling the GC it requires the
reference after the work is done) or requires synchronization, which again
can be expensive for lots of little things.

 

Umm. Since correct superscalar implementations preserve sequential
semantics, this is untrue. The bigger problem on ARM is the crappy memory
model defined by the architecture.

 

 

Yes, via stalling, which puts the timing of this code back to scalar; but
out-of-order read access is becoming more common and is not just a
compiler concern. Yes, ARM has memory issues.

 

 

Maybe I'm just not brave enough; I like small steps and keeping it simple.
But I would have thought that if it were easy there would be more Java or
C# concurrent collectors on the market...

 

There hasn't really been a compelling reason to do this work.
Bytecode-based code has been viewed as not time-critical, though this view
is changing.

 

 

Agreed, it's not needed for 95% of code, as I stated earlier, but it is
needed sometimes. E.g., on a .NET WCF project I did, we ran 10K messages a
second doing store-and-forward, with messages stored for up to hours. End
result: a 4 GB memory heap (no big deal) and 30-second pauses (big deal).
The quick solution was to statically manage the stored messages for each
store in C.

 

 

In fact, if you write such a collector I'm sure you could sell it to M$.

 

I'm pretty sure they wouldn't, since Bjarne Steensgaard has already built
and published the results from one. The big issue for MSFT has (had?) been
that their support for ARM in general was weak, and they didn't have a
compiler that targeted Thumb code (the 32-bit-wide ARM instruction set is
going to be deprecated). Couple that with the fact that ARM has been very
slow with their 64-bit story, and you have a lot of issues all in the air at
the same time - more than enough to cause any sensible development
organization to adopt a "go slow and see" approach. ARM's entire 64-bit
story is in a serious state of disarray, and all of this stuff is something
that you *really* don't want to have to do twice!

 

 

 

Yes, I read the paper last year and rescanned it. It's nice how they
modified the MSIL VM instructions for indirect stack changes; pity there
was no overall benchmark. Maybe you're right and these collectors will soon
be more common, but I still think a generational mark-sweep will be best
for 95% of apps (faster on most amateur "language shootout" benchmarks,
which are important for opinion). Though obviously, if the difference is
less than a few percent, you would just use the pauseless one. Maybe when
BitC is released, have a fast mark-sweep to win over the crowd (who are not
so interested in pauselessness), then update later.

 

The answer is probably that since people are not asking for it, MS isn't
providing resources for it, and that makes C# less useful for things that
need it, like 3D graphics and drivers. A bit of a chicken-and-egg.

Re MS: why would they care about a pauseless collector for ARM on .NET?
The Compact Framework CLR has much bigger issues than a pauseless GC
(which shouldn't be an issue on much smaller heaps). The beauty of such a
collector on a VM is that a VM can recompile everything, including the
standard libs and GC hooks, against a new GC if needed. This may even
happen now when you select the new concurrent GC.

 

 

 

Ben

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
