On Thu, Oct 17, 2013 at 1:55 PM, Jonathan S. Shapiro <[email protected]> wrote:

> Yes, I agree that it is a cost triggered by an in-band action (releasing a
> reference). The problem is that the magnitude (if you prefer: cost
> variance) of that cost is highly unpredictable. If you are releasing a leaf
> object, it's a pretty low cost. If you are releasing the root of an object
> graph, you could be pausing for a time that is bounded only by the size of
> the heap. *Which makes it just as bad as halt-the-world GC*.
>

I strongly disagree that it's as bad as halt-the-world GC.

The issue is whether it's *possible* to control it or not. A programmer
writing the code can tell when releasing a reference may drop a large
amount of stuff, just as with calling a manual freeTheWorld() function,
and can rearrange the code so the release happens at a time when it won't
be perceived. The same cannot be said of a GC world-stop, which can
unpredictably bite any code at any time, whenever any other function in
any other thread allocates.
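To make that concrete, here is a minimal sketch in Rust (the types and the
loading-screen call are made up purely for illustration) of what it means
to choose *when* a big release happens:

use std::sync::Arc;

// Hypothetical types, for illustration only.
struct Asset { bytes: Vec<u8> }
struct Level { assets: Vec<Arc<Asset>> }

fn finish_level(level: Arc<Level>) {
    // The programmer can see that `level` may be the last reference to a
    // large object graph, and can choose the moment the cascading release
    // happens, e.g. behind a loading screen rather than mid-frame.
    show_loading_screen();
    drop(level); // the potentially expensive release happens here, on purpose
}

fn show_loading_screen() { /* placeholder */ }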

I'm not against deferred dec-ref; it sounds fine. But it's a nice-to-have,
because I can implement it manually on top of normal RC by writing a
"deferredDeref()" function that holds onto a reference until a good time
to let go. With thread-safe RC, another thread can do the release without
affecting the thread that decided to drop the big pile of stuff at all.
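Roughly what I mean by "deferredDeref()", again as a Rust sketch (the
reaper thread and the names are mine, nothing standard): handing the last
reference to another thread means the cascading release is paid there,
not on the thread that decided to let go.

use std::sync::Arc;
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Hypothetical node type.
struct Node { children: Vec<Arc<Node>> }

// Spawn a reaper thread. Sending it an Arc moves the (possibly huge)
// cascading release off the sending thread entirely.
fn spawn_reaper() -> Sender<Arc<Node>> {
    let (tx, rx) = channel::<Arc<Node>>();
    thread::spawn(move || {
        for graph in rx {
            // If this was the last reference, the whole graph is freed
            // here, on the reaper thread, at a time nobody perceives.
            drop(graph);
        }
    });
    tx
}

// "deferredDeref": let go of a big pile of stuff without paying for it now.
fn deferred_deref(reaper: &Sender<Arc<Node>>, graph: Arc<Node>) {
    let _ = reaper.send(graph); // ignore the case where the reaper has exited
}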


>> GC tracing is a hidden cost, because in between two instructions which
>> don't allocate or deallocate you can lose a lot of time to the GC.
>>
>
> That's not the case in *any* GC I know about. The rendezvous/syncpoints
> are either explicit instructions or associated with allocations.
>

I'm not aware of any threaded GC system that provides guarantees about
where pauses occur with respect to source lines. The JVM and CLR certainly
do not. An allocation in any thread can trigger a heap trace, which stops
all threads unpredictably between arbitrary source lines.

>> How do you classify iOS and Python's reference counting on the spectrum
>> of "don't make sense" vs "apply some care"? Both of them are used for
>> many, many very successful applications (with quite low
>> end-user-perceived latencies).
>>
>
> I'd classify both of them as sucking irredeemably. In both cases, other
> factors were deemed to be MUCH more important than performance, and both
> systems optimize for the criteria they consider important.
>

My only conclusion from this is that I don't share your definition of
"performance", and I don't share your definition of "making sense".  :)

Languages have no value unto themselves. They are tools for making useful
applications. If really awesome, useful applications can be written and
meet important performance goals, then it's hard for me to call that
"sucking irredeemably". I find it a strange perspective to prioritize
microbenchmarks like single-threaded mutator performance over how the
runtime helps an application meet its overall performance goals and SLAs.
