On 01/06/15 18:40, Steven Schveighoffer wrote:
> On 5/30/15 2:38 PM, Shachar Shemesh wrote:
>
> So given that a compiler actually *works* (i.e. produces valid
> binaries), is speed of compilation better than speed of execution of the
> resulting binary?

There is no answer to that question.

During the development stage, many steps have "compile" as a hard start/end barrier (i.e. you have to finish a task before the compile starts, and cannot continue it until the compile ends). During those stages, the difference between a 1-minute and a 10-minute compile is the difference between 10 bugs and 1 bug solved in a day. That is a huge difference, and it is worth sacrificing any amount of run-time efficiency to avoid it, assuming that is a tradeoff you can later undo.
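
As a back-of-the-envelope illustration, here is a hedged sketch in D; every number in it is invented purely to show the shape of the effect, and the real ratio depends on how much of each fix is spent compiling:

    import std.stdio;

    void main()
    {
        enum workMinutesPerDay = 8 * 60; // hypothetical 8-hour day
        enum editMinutesPerBug = 10;     // hypothetical non-compile work per fix
        enum compilesPerBug = 4;         // hypothetical compile cycles per fix

        foreach (compileMinutes; [1, 10])
        {
            auto costPerBug = editMinutesPerBug + compilesPerBug * compileMinutes;
            writefln("%2d-minute compile: ~%d bugs/day",
                     compileMinutes, workMinutesPerDay / costPerBug);
        }
    }

The point is not the exact numbers, but that the compile sits in the inner loop of development, so its cost is paid again on every single fix.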

Then again, when a release build is being prepared, the difference becomes moot. Even your "outrageous" figures become acceptable, so long as you can be sure that no bugs pop up in this build that did not exist in the non-optimized build.

On the other hand, please bear in mind that our product is somewhat atypical. Most actual products in the market are not CPU-bound on algorithmic code. When that's the case, the optimization stage (beyond the most basic inlining) will rarely give you a 20% overall speed increase. When your code performs a system call every 40 assembly instructions, there simply isn't enough room for the optimizer to work its magic.
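
To make that concrete, here is a minimal sketch of the kind of loop I mean (getpid is merely a stand-in for whatever system call the real code makes):

    import core.sys.posix.unistd : getpid;

    void main()
    {
        long sum = 0;
        foreach (i; 0 .. 1_000_000)
        {
            sum += getpid();   // kernel round-trip dominates each iteration
            sum = sum * 3 + 1; // trivial user-space arithmetic the optimizer
                               // could shave, but it is noise next to the call
        }
    }

Compile that with and without optimizations and the difference disappears into the syscall cost; the optimizer simply has nothing substantial to chew on.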

One exception to the above rule is exactly where it hurts: benchmarks typically do rely on algorithmic code to a large extent.

Shachar
