On Tuesday, 17 June 2014 at 11:59:23 UTC, Manu via Digitalmars-d wrote:
> On 17 June 2014 18:36, Nick Sabalausky via Digitalmars-d
> <[email protected]> wrote:
>> On 6/17/2014 2:56 AM, "Ola Fosheim Grøstad"
>> <[email protected]> wrote:
>>> On Tuesday, 17 June 2014 at 05:52:37 UTC, Nick Sabalausky wrote:
>>>> Well, I think the interesting part we're trying to look at here
>>>> is ARC's impact on speed.
>>>
>>> ARC without deep whole-program analysis is bound to be slow. [...]
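
To make Ola's claim concrete: a naive ARC lowering pays an atomic
increment on every copy of a reference and an atomic decrement on every
destruction, whether or not that copy could possibly be the last one.
A minimal sketch in D, where RC!T is a hypothetical hand-rolled wrapper
standing in for what an ARC compiler would generate (not anything D
ships today):

    import core.atomic;

    struct RC(T)
    {
        private T* payload;
        private shared(int)* count;

        this(T value)
        {
            payload = new T;
            *payload = value;
            count = new shared(int);
            atomicStore(*count, 1);
        }

        this(this)   // postblit: runs on every copy
        {
            if (count) atomicOp!"+="(*count, 1);          // the "retain"
        }

        ~this()
        {
            if (count && atomicOp!"-="(*count, 1) == 0)   // the "release"
            {
                // Last owner. A real implementation would free the
                // payload here; the sketch just drops the pointers.
                payload = null;
                count = null;
            }
        }
    }

    void consume(RC!int r) {}   // by-value: copy on entry, destroy on exit

    void main()
    {
        auto a = RC!int(42);
        foreach (i; 0 .. 1_000)
            consume(a);   // 1,000 atomic retain/release pairs, all
                          // redundant: 'a' is provably alive throughout
    }

Eliding those redundant pairs is exactly the job of the analysis Ola is
talking about; without it, every pointer copy carries this tax.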

>> Right, but what I'm mainly curious about is "How much slower?"
>> Depending on how the numbers play out, it could be that, as Manu has
>> mentioned, the relaxed memory requirements and amortized cost are
>> enough to make it a good tradeoff for a lot of people (like Manu, I
>> have some interest in soft realtime as well).
>>
>> But I'm new to ARC and have never even used Obj-C, so I don't have
>> much of a frame of reference or any ballpark ideas here. That's why
>> I'm interested in the whole "How much slower?" question. Your
>> descriptions of the ins and outs of it, and of Apple's motivations,
>> are definitely interesting. But if nothing else, Manu is certainly
>> right about one thing: what we need is some hard empirical data.

> Andrei posted a document some time back comparing an advanced RC
> implementation with "the best GC", and it performed remarkably well:
> within 10%!

D does not have 'the best GC'. I doubt D's GC is within 10% of 'the
best GC'.

> In addition, my colleagues have reported no significant pain working
> with ARC on iOS, whereas Android developers are always crying about
> the GC by contrast.
>
> I can visualise Walter's criticisms, but what I don't know is whether
> his criticisms are actually as costly as they may seem. I also haven't
> seen the compiler's ability to eliminate or simplify that work, or the
> circumstances in which it fails. It's conceivable that simply
> rearranging an access pattern slightly may offer the compiler the
> structure it needs to properly eliminate the redundant work.
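
The rearrangement Manu describes is easy to illustrate. In the sketch
below, retain and release are hypothetical stand-ins for whatever RC
bookkeeping an ARC-enabled D compiler would insert; the point is only
that the second form makes the reference's lifetime obvious enough for
the pair to be hoisted out of the loop:

    // Hypothetical stand-ins for compiler-inserted RC operations.
    void retain(Object o) { /* ++o.refcount */ }
    void release(Object o) { /* --o.refcount, destroy at zero */ }

    class Node { int value; }

    // (a) A conservative lowering sees a fresh use of 'n' in every
    // iteration and pays a retain/release pair each time round the loop.
    int sumNaive(Node n, int iterations)
    {
        int total;
        foreach (i; 0 .. iterations)
        {
            retain(n);      // compiler-inserted
            total += n.value;
            release(n);     // compiler-inserted
        }
        return total;
    }

    // (b) The same computation, arranged so the reference's lifetime
    // plainly spans the loop: one pair covers all iterations.
    int sumHoisted(Node n, int iterations)
    {
        retain(n);          // hoisted out of the loop
        int total;
        foreach (i; 0 .. iterations)
            total += n.value;
        release(n);
        return total;
    }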

> The thing is, I don't know! I really don't know, and I don't know of
> any practical way to experiment with this. D theoretically offers many
> opportunities for ARC optimisation that other languages don't, via its
> rich type system, so direct comparisons with Obj-C could probably be
> considered quite conservative.
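
One speculative example of what that rich type system could offer: D's
'scope' storage class promises that a parameter does not escape the
callee, which is precisely the guarantee an ARC compiler needs in order
to pass a borrowed reference with no RC traffic at the call boundary.
(This is an assumption about a hypothetical ARC lowering, not anything
a current D compiler actually does.)

    class Texture { int width, height; }

    Texture globalTex;

    // No annotation: 'tex' might escape (and here it does), so a
    // conservative ARC lowering must retain before the call and
    // release after it.
    int areaAndStash(Texture tex)
    {
        globalTex = tex;   // escapes the call
        return tex.width * tex.height;
    }

    // 'scope' promises 'tex' does not outlive the call, so the
    // caller's existing reference keeps it alive: no retain/release
    // would be needed at the boundary.
    int area(scope Texture tex)
    {
        return tex.width * tex.height;
    }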

> Here's what I do know, though: nobody has offered a conception of a GC
> that may be acceptable on a memory-limited device, and a GC is also
> not very acceptable just by its nature (destructors are completely
> broken and should be removed, as in C#; the cost is concentrated
> rather than amortised).
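
The destructor complaint is concrete in today's D: when the GC
finalises an object, any GC-managed members may already have been
collected, so a class destructor cannot reliably release anything
through them, and it runs at an unpredictable time inside a collection
pause. A small example of the trap:

    class LogFile
    {
        string name;
        this(string n) { name = n; }
    }

    class Session
    {
        LogFile log;
        this() { log = new LogFile("session.log"); }

        ~this()
        {
            // Unsafe if the GC runs this finaliser: 'log' may already
            // have been collected, so even reading 'log.name' can
            // touch freed memory.
            // import std.stdio : writeln;
            // writeln("closing ", log.name);
        }
    }

    void main()
    {
        auto s = new Session;
        // 's' is finalised whenever a collection happens to reclaim
        // it, concentrating the cost in the pause rather than
        // amortising it.
    }

Under ARC, by contrast, destruction is deterministic: the destructor
runs when the last reference is released, members are still valid, and
the cost is paid incrementally at release sites.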

> As far as I know, there is NO OTHER CHOICE. Either somebody invents
> the fantasy GC, or we actually *experiment* with ARC...
>
> We know: the GC is unacceptable, and nobody has any idea how to make
> one that is.
> We don't know: whether ARC is acceptable or unacceptable, and why.
>
> What other position can I take on this issue?

Check out the compiler and start the experiment you keep talking about.

Cheers,
ed