> Nim with the C backend can never be faster than C.

What we are actually comparing here is _typical_ code written in C to typical 
code written in Nim. There are many examples where Nim can win out:

  * _Typical_ C code is often actually partly / mostly C++, which is [often 
slower](https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=gcc&lang2=gpp)
 than _actual_ C. ;)
  * For many programs the most important performance gains come from 
parallelism, which is a nightmare with _typical_ C code but a lot easier to 
get right with Nim (see the first sketch after this list).
  * _Typical_ C code is often only used with one compiler, and a user could 
have non-trivial problems using a different compiler. For example, the Linux 
kernel had so many GCC-isms that getting it to compile with Clang has been an 
ongoing multi-year saga. Nim, on the other hand, emits fairly portable C code, 
so you can benchmark various C/C++ compilers (including proprietary ones) on 
different platforms to get the best binaries.
  * _Typical_ C code is constrained in its use of third-party libraries, 
because C/C++ doesn't have modern package management (like pip or nimble). On 
different OSes you'd use a different package manager to install those 
dependency sources / libs, and there can be version conflicts, etc. That makes 
programmers more shy about adding dependencies, fearing a longer, uglier, and 
more breakage-prone build process. If you're using Boost coroutines, you're 
more likely to use Boost's JSON parsing too, even if it's not the fastest. With 
pip (and eventually nimble), on the other hand, you can install many different 
JSON libraries with one command and then decide which one is fastest / best 
for your program. When libraries compete, you win.
  * Another optimization technique that is difficult in C, and therefore 
rarely used to its full potential, is code generation (automatic programming). 
It's used a lot more in Go, and I think Nim can go a lot further. Sometimes it 
even makes sense (for performance reasons) to do _all_ configuration at 
compile time and embed _all_ your data inside the executable, which means 
there's less to do at run-time. Etc. (See the second sketch after this list.)
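
To make the parallelism point concrete, here's a minimal sketch (file name and 
numbers are just for illustration) that splits a sum across Nim's standard 
threadpool:

```nim
# par.nim -- compile with: nim c --threads:on -d:release par.nim
# (swap in --cc:clang / --cc:vcc etc. to benchmark different backend C compilers)
import threadpool

proc partialSum(a, b: int): int =
  ## Sums the integers in [a, b).
  for i in a ..< b:
    result += i

var parts: array[4, FlowVar[int]]
for i in 0 ..< 4:
  parts[i] = spawn partialSum(i * 1_000_000, (i + 1) * 1_000_000)

var total = 0
for p in parts:
  total += ^p     # ^ blocks until that spawned task has finished
echo total
```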
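
And for the code generation point, a small sketch of moving work to compile 
time (the file name `greeting.txt` is made up):

```nim
const
  # File contents are embedded in the executable at compile time;
  # nothing is read from disk at run-time.
  greeting = staticRead("greeting.txt")
  squares = block:            # lookup table computed entirely at compile time
    var t: array[256, int]
    for i in 0 .. 255:
      t[i] = i * i
    t

echo greeting
echo squares[12]              # just an array index at run-time
```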



> Why should Nim be larger, more ram intensive or need more cpu?

I said those estimates were pessimistic - I was playing devil's advocate. 
Benchmarks vary [(ex)](https://github.com/kostya/benchmarks). And of course 
Nim's performance is more likely to see significant improvements in the future 
than C/C++'s, so the gap will narrow.

> There is no "Python can be as fast as C++" if you pay more money for the 
> hardware as you could pay that for C++ too

The money (time is money) you spend writing it in C/C++ instead of Python is 
money you can't spend on buying execution efficiency some other way 
(electricity / local hardware / more cloud hosting resources / compensating 
for potential customers lost due to minimum system requirements).

What I'm saying is that we should measure in economic units as a common 
denominator for all trade-offs. In terms of "total cost of ownership", hardware 
and electricity are usually far behind the value of development time. Nim 
likely offers a little less execution efficiency than C in exchange for A LOT 
more value in developer time, flexibility, and safety.
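
To put numbers on that trade-off, a tiny back-of-envelope sketch - every 
figure below is hypothetical, purely to show the shape of the comparison:

```nim
let
  devMonthCost    = 10_000.0   # one developer-month (hypothetical)
  extraDevMonths  = 3.0        # extra effort for a lower-level rewrite
  hostingPerMonth = 400.0      # current hosting bill
  savedFraction   = 0.25       # hosting saved by the faster implementation

let breakEvenMonths = (devMonthCost * extraDevMonths) /
                      (hostingPerMonth * savedFraction)
echo breakEvenMonths   # 300.0 -> 25 years before the rewrite pays for itself
```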

> And there is no "something is fast enough" besides of having no timeframe for 
> execution, which is seldom if ever.

I find that to be the case quite often.

If your game is for PS4, the hardware baseline is fixed: there's no such thing 
as a PS4 with less than 8GB of RAM, so the difference between your game using 
4GB and 5GB is inconsequential to your bottom line. Multitasking is not really 
an issue for games either. If a Windows game runs slowly because the antivirus 
is doing a full scan in the background, you won't lose many sales by telling 
the user to pause the scan.

For back-end services, resources often come in bundles: for example, I need 
8GB of RAM to fit my database, and on 
[DigitalOcean](https://www.digitalocean.com/pricing/) that plan gets me a lot 
more CPU power and transfer than I need, effectively for free.

> Which leads to my impression that you ignore that we kinda reached the single 
> core speed limit for CPUs [...]

Nah. Terahertz 
[Graphene](https://www.extremetech.com/extreme/175727-ibm-builds-graphene-chip-thats-10000-times-faster-using-standard-cmos-processes) 
([3D](https://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit)) 
chips are coming... :D

I agree: that doesn't mean execution efficiency is unimportant, but it's often 
a low priority. In most situations, human effort is of much greater value.

> I personally even hate this "the hardware today is so fast, lets waste it 
> with suboptimal coding because developer time cost more than hardware".

So do I, but there are trade-offs. Software bloat is the price we pay for 
cheaper (as in free) software. Having software written in a higher-level 
language also means it's easier for me to read and tweak the code. 
