On Wednesday, 12 March 2014 at 22:50:00 UTC, Walter Bright wrote:
> The argument for final by default, as eloquently expressed by Manu, is a good one. Even Andrei agrees with it (!).
>
> The trouble, however, was illuminated most recently by the std.json regression that broke existing code. The breakage wasn't even intentional; it was a mistake. The fix was also simple: just a tweak here and there to user code, and the compiler pointed out where each change needed to be made.
>
> But we nearly lost a major client over it.

Was this because of the breaking change itself, or because of the lack of warning and the nature of the change?

The final by default change should not catch anyone by surprise. There would be plenty of time to prepare for it: warnings would be issued and an entire deprecation process gone through. It would not be a random compiler change that breaks code by surprise for no end-user benefit. When the warnings start appearing, the compiler can tell you *exactly* what you need to change to bring your code up to date. And in the end there is a net benefit to the user in the form of less error-prone, faster code.
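
To make the migration concrete: in current D, class methods are virtual by default and "final" is the opt-out; the proposal flips that default. Here is a minimal sketch of today's semantics, using hypothetical Widget/Button classes (this is valid D as it stands, not the proposed syntax):

    class Widget
    {
        // Today: virtual by default, dispatched through the vtable.
        void draw() { }

        // Opting out with final: non-virtual, callable directly,
        // and a candidate for inlining.
        final void size() { }
    }

    class Button : Widget
    {
        override void draw() { }     // fine: draw is virtual

        // override void size() { }  // error: cannot override a final method
    }

Under final by default the annotations swap: size would need no marking, and draw would have to be declared overridable explicitly (the keyword proposed in this discussion was "virtual"). That explicit marking is exactly the tweak the deprecation warnings would point at.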

I used to get frustrated when my code would randomly break with every compiler update (and it shows how much D has progressed that regressions in my own code are now a rare occurrence). But unexpected regressions such as the std.json one are very different from intended changes that come with plenty of time and warning, and that provide an overall benefit, even if a slight one in many cases, to the end user.
