Walter Bright wrote:
dsimcha wrote:
I personally am a scientist (bioinformatics specifically) and I think having basic complexity management in your code is worthwhile even at fairly small project sizes. I learned this the hard way. For anything over about 100 lines I want some modularity (classes/structs, higher order functions, arrays that are more than a pointer + a convention, etc.) so that I can tweak my scientific app easily. Furthermore, multithreading is absolutely essential for some of the stuff I do, since it's embarrassingly parallel, and is a huge PITA in C.


When I was working on converting Optlink to C, I thought long and hard about why C instead of D. The only, and I mean only, reason to do it via C was because part of the build process for Optlink used old tools that did not recognize newer features of the OMF that D outputs.

Once it is all in C, the old build system can be dispensed with, and then it can be easily converted to D.

If you want to, you can literally write code in D that is line-for-line nearly identical to C, and it will compile to the same code, and will perform the same.
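To make that concrete, here is a minimal sketch (my own example, not from the post) of D written in this C-like style: raw pointers, `core.stdc` bindings, no GC allocations, so the generated code should match what a C compiler would emit for the equivalent C function.

```d
// Nearly line-for-line C in D: only the import lines differ from the
// C version, which would use #include <stdlib.h> and <string.h>.
import core.stdc.stdlib : malloc, free;
import core.stdc.string : memcpy;

// Duplicate a raw buffer, C-style: manual allocation, pointer
// arithmetic semantics, no bounds checking, no garbage collector.
extern(C) void* dupBuffer(const(void)* src, size_t n)
{
    void* dst = malloc(n);
    if (dst == null)
        return null;
    memcpy(dst, src, n);
    return dst;
}
```

The body transliterates token-for-token into C; the `extern(C)` attribute also gives the function C linkage, so it can be called from C code unchanged.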

You can do the same with C++ -- Linus surely knows this, but I suspect he didn't want to use C++ because, sure as shinola, members of his dev team would start using operator overloading, virtual base classes, etc.

Well, if you ask the question "what's C++'s biggest mistake?", it's much more difficult to answer. C++'s failure to specify the ABI is reason enough to use C instead, I reckon. I think it's an appalling, inexcusable mistake -- it guaranteed that compiled libraries 20 years later would use extern(C), not extern(C++). And that's not even the worst C++ mistake.
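A sketch of what that means in practice for a D binding to some hypothetical compiled library (the names `lib_frob` and `frob` are mine, for illustration): the C-linkage declaration works against any compiler's build of the library, while the C++-linkage one only links if the mangling and layout happen to match the compiler that built it.

```d
// Hypothetical bindings to a precompiled third-party library.

// extern(C): symbol name and calling convention are fixed by the
// platform's C ABI, so a build of the library from any C or C++
// compiler will link against this declaration.
extern(C) int lib_frob(int x);

// extern(C++): the mangled symbol name, exception machinery, and any
// class layout depend on which C++ compiler (sometimes which version)
// produced the library -- the unspecified-ABI problem described above.
extern(C++) int frob(int x);
```

This is why stable library boundaries in the wild overwhelmingly expose a C interface, even when both sides are written in C++.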
