On Monday, 5 May 2014 at 08:04:24 UTC, Jonathan M Davis via Digitalmars-d wrote:
On Mon, 05 May 2014 07:39:13 +0000
Paulo Pinto via Digitalmars-d <[email protected]> wrote:
Sometimes I wonder how much money C design decisions have cost
the industry in terms of anti-virus software, static and dynamic analysis tools, operating-system security enforcement, security research
and so on.

All avoidable with bounds checking by default and no implicit
conversions between arrays and pointers.

Well, a number of years ago, the folks who started the codebase of the larger products at the company I work at insisted on using COM everywhere, because we _might_ have to interact with 3rd parties, and they _might_ not want to use C++. So, foolishly, they mandated that _nowhere_ in the codebase should any C++ objects be passed around except by pointer. They then had manual reference counting on top of that to deal with memory management. That decision has cost us man _years_ in time working on reference counting-related bugs. Simply using smart pointers instead would probably have saved the company millions. COM may have its place, but forcing a whole C++ codebase to function that way was just stupid, especially when pretty much none of it ever had to interact directly with 3rd party code (and even if it had, it should have been done through strictly defined wrapper libraries; it doesn't make sense that 3rd
parties would hook into the middle of your codebase).


So the decision was made and CComPtr<> and _com_ptr_t<> weren't used?!

I feel your pain. Well, now the codebase is WinRT ready. :)
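
On the smart-pointer point, here is a minimal sketch of the difference (my own illustration, not code from that codebase; IWidget, Widget, MakeWidget and ScopedPtr are made-up names, and ScopedPtr just stands in for CComPtr<>, _com_ptr_t or any other RAII wrapper):

#include <cstdio>

// Hypothetical COM-like interface with intrusive reference counting.
struct IWidget {
    virtual void AddRef()  = 0;
    virtual void Release() = 0;   // deletes the object when the count hits 0
    virtual void Frob()    = 0;
    virtual ~IWidget() = default;
};

// Concrete implementation, just enough to run the example.
struct Widget : IWidget {
    int refs = 1;
    void AddRef()  override { ++refs; }
    void Release() override { if (--refs == 0) delete this; }
    void Frob()    override { std::puts("frobbed"); }
};

IWidget* MakeWidget() { return new Widget; }  // caller owns one reference

// Tiny RAII wrapper in the spirit of CComPtr<T>: Release() runs exactly
// once, on every exit path, in the destructor.
template <typename T>
class ScopedPtr {
    T* p_;
public:
    explicit ScopedPtr(T* p = nullptr) : p_(p) {}
    ~ScopedPtr() { if (p_) p_->Release(); }
    ScopedPtr(const ScopedPtr&) = delete;
    ScopedPtr& operator=(const ScopedPtr&) = delete;
    T* operator->() const { return p_; }
    explicit operator bool() const { return p_ != nullptr; }
};

// Manual style: every return path has to remember Release().
void use_manual() {
    IWidget* w = MakeWidget();
    if (!w) return;
    w->Frob();
    w->Release();            // forget this on any path and the object leaks
}

// RAII style: the wrapper releases on every path, including early returns.
void use_scoped() {
    ScopedPtr<IWidget> w(MakeWidget());
    if (!w) return;
    w->Frob();
}                            // Release() happens here automatically

int main() {
    use_manual();
    use_scoped();
    return 0;
}

Every early return, error branch and exception in the manual version is another chance to leak or double-Release; the wrapper makes those paths correct by construction, which is exactly the class of bugs described above.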


Seemingly simple decisions can have _huge_ consequences - especially when that decision affects millions of lines of code, and that's definitely the case with some of the decisions made for C. Some of them may have been unavoidable given the hardware situation and programming climate at the time that C was created, but we've been paying for them ever since. And unfortunately, the way things are going at this point, nothing will ever really overthrow C. We'll have to deal with it on some level for a long, long time to come.

- Jonathan M Davis


I doubt those decisions really made sense, given the other system programming languages of the time. Most of them did bounds checking, had no such implicit conversions, and operating systems were being written in them.

The Algol 60 reference compiler, for example, did not allow disabling bounds checking at all.
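
For contrast with C, a quick sketch of the two behaviours side by side (my own illustration, in C++ for brevity, not taken from the article): the C-inherited path silently decays the array to a pointer and accepts the out-of-bounds write, while a checked access rejects it at run time.

#include <array>
#include <cstdio>
#include <stdexcept>

// C-inherited behaviour: the array decays to a pointer and the bad index
// is accepted without any diagnostic -- undefined behaviour at run time.
void unchecked() {
    int buf[4] = {0, 1, 2, 3};
    int* p = buf;        // implicit array-to-pointer conversion
    p[7] = 42;           // out of bounds, silently corrupts whatever is there
}

// Checked behaviour: at() validates the index and throws instead.
void checked() {
    std::array<int, 4> buf = {0, 1, 2, 3};
    try {
        buf.at(7) = 42;  // throws std::out_of_range
    } catch (const std::out_of_range& e) {
        std::printf("caught: %s\n", e.what());
    }
}

int main() {
    checked();           // prints the caught error
    // unchecked();      // deliberately not called: it is undefined behaviour
    return 0;
}

The unchecked version compiles without a single diagnostic, which is exactly the kind of error Hoare's customers refused to tolerate, as the quote below shows.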

Quote from Tony Hoare's ACM Turing Award article[1]:
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."


[1] http://www.labouseur.com/projects/codeReckon/papers/The-Emperors-Old-Clothes.pdf
