I'll summarise my arguments, though I've done this at least 3 times now.
Sorry, I 'value' more of my points than you do, so my summary is rather longer.
These are all supporting reasons why I think it would be a good change, and naturally, some are of lower significance than others.
I'd like to think that most of them would have to be objectively rejected, or the counter-arguments list would have to grow considerably, to justify the insults you offer:
* At top level I believe D aspires to be a systems language, and
performance should certainly be a key concern.
- 'Flexibility' [at the expense of performance] should be opt-in. As the default, it comes at the expense of what I presume should be a core audience for a systems language.
- x86 is the most tolerant architecture _by far_, and we're committing to a cost that isn't even known yet on the vast majority of computers in the world.
* virtual is a one-way trip. It can't be undone without risking breaking code once released to the wild. How can that state be a sensible default?
- It cannot be undone by the compiler/linker the way it can in other (dynamic) languages. No sufficiently smart compiler can ever address this problem as an optimisation.
* The result of said performance concern has a cost in time and money for
at least one of D's core audiences (realtime systems programming).
- I don't believe the converse case, final-by-default, would present any comparable loss for users who want to write 'virtual:' at the top of their class.
- 'Opportunistic de-virtualisation' is a time-consuming and tedious process, and it tends to come up only during crunch times.
* "Classes routinely should make most methods final because it's hard to
imagine why one would override [the intended] few. Since those are a
minority, it's so much the better to make final the default."
- The majority of classes are leaves, and there's no reason for leaf methods to be virtual by default. Likewise, most methods are trivial accessors (where the virtual-call overhead is proportionally most costly), which have no business being virtual either.
- It's also self-documenting. It makes clear to a customer how the API is to be used.
* Libraries written in D should hope to be made available to the widest
audience possible.
- Library authors certainly don't consider (or care about) everyone's usage cases, but they often write useful code that many people want to make use of. That is the definition of a library.
- They are almost certainly not going to annotate their classes with lots of 'final'.
- Given hard experience, when asked to revoke virtual, authors will refuse to do it, even if they agree in principle, given the risk of breakage for unknown customers.
- Adding final as an optimisation is almost always done post-release, so it will almost always run the risk of breaking someone's code somewhere.
* Experience has shown that programmers coming from C++/C# don't annotate 'final' even when they know they should. Users coming from Java don't do it either, but mainly because they don't consider it important.
- Note: 'the most performance conscious users' that you refer to are
often not the ones writing the code. Programmers work in teams, sometimes
those teams are large, and many programmers are inexperienced.
* final-by-default promotes awareness of virtual-ness and its associated costs.
- If it's hidden, it will soon be forgotten or dismissed as a trivial
detail. It's not... at least, not in a systems language that attracts
high-frequency programmers.
* 'Flexibility' may actually be a fallacy anyway. I personally like the
idea of requiring an explicit change to 'virtual' in the base when a new
and untested usage pattern is to be exploited, it gives me confidence.
- People are usually pretty permissive when marking functions virtual in C++, and people like to consider many possibilities.
- When was the last time you wanted to override a function in C++, but the author didn't mark it virtual? Is there actually a reduction in flexibility in practice? Is this actually a frequent reality?
- Overriding unintended functions may lead to dangerous behaviours
never
considered by the author in the first place.
- How can I be confident in an API when I know the author couldn't possibly have tested all the obscure possibilities available? And how can I know the extent of his consideration of usage scenarios when authoring the class?
- At best, my obscure use case has never been tested.
- 'virtual' is self-documenting, succinctly communicating the author's design/intent.
* Bonus: Improved interoperability with C++, which I will certainly appreciate, though this point came from the D-DFE guys at dconf.
And I'll summarise my perception of the counter-arguments:
* It's a breaking change.
* 'Flexibility'; someone somewhere might want to make use of a class in a creative way that wasn't intended or tested. In principle, they shouldn't be prohibited from this practice _by default_.
- They would have to contact the author to request that a method be made virtual, in the unlikely event that the source isn't available and they want to use it in some obscure fashion the author never considered.
- Note: This point exists on both sides, but on this side the author is likely to be accommodating to such requests.
- Authors would have to write 'virtual:' if they want to offer this style of fully extensible class.