@Stefan_Salewski - The idea of defaulting to an optimizing mode has come up before. It runs contrary to what almost any user coming from the "compiled language" world would expect. For various reasons (debuggability, compile-time performance, etc.), almost all compilers default to non-optimized (or very weakly optimized) output and provide knobs to crank up optimization, as the current `nim` setup does.
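For concreteness, here is roughly what those knobs look like today (the exact flag sets behind each define can shift between Nim releases):

```
# Default: fast compiles, runtime checks on, little backend optimization.
nim c myprog.nim

# Typical optimized build (defines `release`, enables backend -O flags).
nim c -d:release myprog.nim

# Like release, but also drops runtime checks; changes failure semantics!
nim c -d:danger myprog.nim

# Extra flags can be passed straight through to the backend C compiler.
nim c -d:release --passC:-march=native myprog.nim
```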
There is even a whole dark art of finding the "best optimization flags", which can be taken [to severe extremes](https://github.com/Acovea/libacovea). More simply, and some might say more intelligently, you can often use PGO ([https://forum.nim-lang.org/t/6295](https://forum.nim-lang.org/t/6295)) to get 1.25..2.0x boosts on the object code produced from nim-generated C (a minimal sketch of that workflow follows below). Some flags like `-ffast-math` can even change semantics in subtle ways that may or may not impact correctness depending on the use case (see the second sketch below).

I don't know what to do about people publicizing misleading benchmarks. That seems an ineliminable hazard, not only for Nim but **_everywhere and all the time_**, and often not on purpose (though much "marketing" does strawman comparisons on purpose). Besides compiling with the wrong flags, people can also use a bad algorithm, unrepresentative inputs, weird machines, benchmarks naive relative to what they claim to measure, and probably several other mistake categories. :-/ The best strategy may be to be supportive where & when we can, educating as we go, though I agree/admit that is a neverending battle.
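As a concrete illustration of the PGO route mentioned above, here is a minimal sketch using gcc's instrumentation flags passed through via `--passC`/`--passL` (the flag names are gcc's; the training workload and exact invocations are up to you, and the linked forum thread has more detail):

```
# 1. Build with profiling instrumentation (gcc flags passed through).
nim c -d:danger --passC:-fprofile-generate --passL:-fprofile-generate myprog.nim

# 2. Run on a representative workload to collect *.gcda profile data.
./myprog typical_input

# 3. Rebuild, letting gcc use the collected profile to guide optimization.
nim c -d:danger --passC:-fprofile-use --passL:-fprofile-use myprog.nim
```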

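And to make the `-ffast-math` caveat concrete, a tiny assumed (untested) Nim example of the kind of semantic shift it can cause: gcc's `-ffast-math` implies `-ffinite-math-only`, under which the backend may assume NaN never occurs and fold NaN tests like the one below to false.

```nim
import std/math

# nim c demo.nim                        -> prints "NaN"
# nim c --passC:-ffast-math demo.nim    -> may print "finite", since the
#   C backend can fold the isNaN test away under -ffinite-math-only.
let x = 0.0 / 0.0   # NaN
if x.isNaN:
  echo "NaN"
else:
  echo "finite"
```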